Fall Dr. Jay Wilson
ECUR 809.3-83551 Models & Methods for Evaluation of Educational Programs.
Calendar: 9:00 am - 2:50 pm
September 5 - 26
October 17
November 21
ECUR 990-82712 Curriculum Research, Dr. Janet McVittie, Education Building 10, Sep 03, 2009 - Dec 04, 2009, Seminar
http://www.usask.ca/education/people/mcvittiej.htm
http://www.usask.ca/education/coursework/mcvittiej/edcur322.html
http://www.usask.ca/education/coursework/mcvittiej/resources/index.html
Elluminate sessions: September 19 - October 3
November 7 and 21
January 16, February 27
March 13 - March 27
GRS 960-86150 Ethics & Integrity Dr. Diane J. Martz
http://www.spheru.ca/spheru-1/research-tem/dr.-diane-martz/dr.-diane-martz/?searchterm=Martz
Blackboard, 2 hrs; retake the test until you earn CR. Sep 03, 2009 - Dec 04, 2009, Supervised Self-Instruction
Research by Dr. Martz: http://www.spheru.ca/research-projects/rural-youth-risk-behaviours-and-healthy...
Winter
ECUR 991-27193 Scholarship in Teaching - Portfolios, Dr. Timothy Molnar, http://www.usask.ca/education/people/molnart.htm
Class 9:00 am - 11:50 am, Saturday, Education Building 3133
Jan 04, 2010 - Apr 08, 2010 Seminar
Saturday, 9:00 am - 11:50 am
Saturday, 29 August 2009
Wednesday, 26 August 2009
Article Reviews by Nelson Dordelly-Rosales
Article Review # 1
Bringing the background to the foreground: What do classroom environments that support authentic discussions look like?
References:
Hadjioannou, X. (2007). Bringing the background to the foreground: What do classroom environments that support authentic discussions look like? American Educational Research Journal, 44(2), 370-399.
Mapiasse, S. (2007). Influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia [Electronic version]. International Education Journal, 8(2), 393-407.
Overview
Hadjioannou (2007) focused on authentic, or dialogic, discussions in the classroom. Authentic discussions are a classroom-based speech genre in which participants jointly explore issues of interest by articulating ideas and opinions. A case study was conducted to shed light on authentic discussions using several qualitative approaches, including recorded class sessions, interviews, and field notes. The researcher identified seven elements that appeared to be related to students' involvement in classroom activities and to the social relationships among community members: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation.
Problem/Issue and the Importance/Significance
Hadjioannou (2007) aimed to answer the question: What do classroom environments that support authentic discussions look like? The study examined the features of the environment of a fifth-grade classroom community. The article reports part of a wider qualitative study that sought (a) to examine interpersonal relationships within the classroom, specifically the texture of talk in the authentic discussions of the community under study, (b) to explore participant perspectives, and (c) to evaluate the classroom environment.
Research Question
What are the features of the classroom environment of this discourse community that frequently used authentic discussions?
Sample and sample selection process
The community under study was a fifth-grade class of 24 students and their teacher. The fifth-grade classroom community under study was part of Grassroots Elementary School (pseudonym), a quintessentially middle-class school in a midsize town in Florida.
Data Collection Method/ Data Analysis Method
Data collection included observation, participant interviews, and audio and video recordings of class sessions on an almost daily basis over a five-month period. Identifying the major elements of the classroom environment was a generative process that began with initial coding of the field notes and interview transcripts. Each participant was interviewed four times using a flexible interview protocol. The author audio- and video-recorded four book-talk sessions, which were transcribed verbatim and analyzed through discourse analysis. The goal was to use a database with highly contextualized descriptors to illustrate the content of the data systematically. The findings of the discourse analysis were used primarily to describe the texture of talk in authentic discussions, but also to capture the elements of the classroom environment in action.
Trustworthiness/Validity Considerations
In addressing credibility, Hadjioannou (2007) presented a detailed picture of the phenomenon under scrutiny. The investigator provided sufficient detail about the context of the fieldwork, identified the elements that seemed to shape the environment of the classroom community under study, and described "how those elements functioned as repeatable threads woven to create the fabric of the classroom's social life" (p. 374). The researcher suggested that reproducing the environment described in this study in another classroom would be impossible because "the environments in communities are in constant flux, and they are shaped by the personalities and the agendas of community members as well as by the unique circumstances of each community" (p. 396). However, by fostering dialogic discussions and cultivating amiable relations among students, teachers can provide opportunities for student self-expression, lively interaction, and substantive collaboration in any classroom.
Ethical Issues
Confidentiality and anonymity were guaranteed to participants of the study. In the ethics literature, confidentiality is commonly viewed as akin to the principle of privacy. In this study, the researcher used a pseudonym to identify the elementary school under study.
Reflection - Questions
Were the studies of value? Why or why not? Mapiasse (2007) examined the influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia, and analyzed seven dimensions that support a democratic climate in the classroom: active participation, avoidance of textbook-dominated instruction, reflective thinking, student decision-making and problem-solving choices, controversial issues, recognition of human dignity, and relevance. Hadjioannou (2007) explored the environment of a fifth-grade classroom community in Florida and analyzed the elements that support authentic discussions: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation. Both studies identified the elements that seem to shape the classroom environment and described how those elements functioned as repeatable threads woven to create the fabric of authentic discussions and a democratic climate in the classroom's social life. The two studies provide insights of great value for teaching and learning.
What were the strengths of the two studies? The results indicated that a democratic climate and authentic discussions have significant effects on student engagement, knowledge, and interpretation skill. Mapiasse (2007) centered on the advantages of a democratic environment in the classroom; Hadjioannou (2007) emphasized the importance of interpersonal and social interaction among students and teachers. The classroom environment is extremely important to effective teaching and learning. These studies described in great detail different indicators of a good classroom social environment; they are also good examples of qualitative research.
What were the limitations of the two studies? Subjects in each study were students of only one institution. Therefore, the results were limited in their applicability to other institutions. Similar research studies should be repeated in other institutions and in different subjects to determine whether those aspects of the classroom environment that appeared essential to effective teaching are similar to those obtained in these studies.
How would you have changed the two studies to improve the quality of the research? For the first study, I would enlarge the sample size and add a questionnaire for data collection. For the second study, I would add participant interviews and audio and video recordings of class sessions. In addition, for the first study, it would be necessary to explain instrument validation and item reliability, as the second study did.
How would you incorporate the findings of the two studies into your classroom? I would like to develop a similar qualitative research study selecting a convenience sample of schools in Venezuela. In research and teaching, I would incorporate the democratic environment using meaningful classroom activities. I would work toward knowing my students and use this knowledge to create positive, trusting, and respectful relationships with them.
It is important to engage students in authentic dialogue, discussion, and learning activities, especially in civic education classrooms that involve law and education. We should provide students with opportunities to gain a deeper understanding of civic values and enable them to apply democratic values critically and responsibly in their social interactions; that is, to engage individuals and groups in developing a clear statement of belief about what strong democracy would look like.
Article Review # 2 by Nelson Dordelly-Rosales
Investigating Self-Regulation and Motivation:
Historical Background, Methodological Developments, and Future Prospects
References:
Zimmerman, B. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166-183.
Zimmerman, B. (2002). Becoming a self-regulated learner: An overview. Retrieved June 2, 2009, from http://findarticles.com/p/articles/mi_m0NQM/is_2_41/ai_90190493/
Overview
Zimmerman (2008) assessed students' self-regulated learning (SRL) online. The focus was on processes and motivational feelings or beliefs regarding learning in authentic contexts, using computer "traces" (gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. The results revealed that students in high-SRL online classes were more engaged in their writing than students in low-SRL classes, and that students in the training group reported significantly greater increases in time management skill and self-reflection on their learning than those in the control group. Students in the self-regulation training condition also displayed increases in several measures of motivation: their willingness to exert effort, their task interest, their learning-goal orientation, and their perceptions of self-efficacy all increased after training, while their feelings of helplessness declined significantly. Students in "the self-regulation training group displayed significantly greater gains in math achievement than students in the control group" (p. 175).
Problem/Issue and the Importance/Significance
The study examined how an innovative environment impacts students' use of self-regulatory processes during the course of learning. The study is significant because it sheds light on the motivation and self-regulation process. One lesson for instructors and learners was that a self-regulation strategy measure can predict students' academic grades and their teachers' ratings of their proactive efforts to learn in class.
Research Questions
The first question concerned the innovative software program (called “gStudy environment”), that is, how traces measure SRL as compared to self-reported measures. The researcher assessed changes in self-regulation during learning. The second question dealt with students’ levels of SRL in personally managed contexts, such as at home or in the library. The idea was to find out if students’ levels of SRL were linked to improvements in the students’ overall academic achievement. The third question involved whether teachers can modify their classrooms to foster increases in self-regulated learning. The fourth question concerned the role of students’ motivational feelings and beliefs in initiating and sustaining changes in their SRL.
Sample and sample selection process
Teachers were randomly assigned to either an experimental or a control group. Nine teachers were trained to convey the underlying cyclical model and to develop homework exercises, quizzes, and a final examination in arithmetic skill. The control group of eight teachers gave the same homework assignments and tests but received no self-regulation training. The students in both conditions kept diary accounts of SRL events.
Data Collection/Analysis Methods
The author used innovative qualitative and quantitative methods, including teacher and student data collection and different analysis methods (observation forms, portfolio assessments, interviews, and questionnaires), to measure SRL. Teachers in the SRL training condition gave students a copy of the cyclical model of self-regulation along with a picture of a "learning expert" who recommended self-regulatory practices that the teacher modeled for them. Students were given daily feedback and were encouraged to set challenging goals and choose a specific strategy for themselves. Students in the experimental group were given points on the basis of their homework answers. The students were assessed on their interests, attitudes, and self-related cognition before and after a five-week training program. The students' calibration of the accuracy of their achievement was significantly correlated with their actual posttest score.
Instructional and ethical issues
Technology is a tool that can change the nature of SRL. However, the role of the teacher is critical in providing guidance and support for self-regulated academic learning.
Reflection - Questions
Were the studies of value? Why or why not? In an earlier study, Becoming a Self-Regulated Learner, Zimmerman (2002) showed that self-regulated learning (SRL) is not a mental ability or an academic performance skill; rather, it is the self-directive process by which learners transform their mental abilities into academic skills. The author identified how a student's use of specific learning processes, level of self-awareness, and motivational beliefs combine to produce self-regulated learners. In the more recent study, Zimmerman (2008) showed that, compared with control students, SRL-trained students displayed significant increases in homework effectiveness, time management skills, a broad array of self-reflection measures, and math performance skill (in fact, the self-regulation training group's pass rate on an entrance exam for admittance to a higher-level school increased by 50% compared with past cohorts). Both studies were of value.
What were the strengths of the two studies? Zimmerman (2002) showed that self-regulated, independent learners take responsibility for what they learn, and he analyzed how far they can go with this knowledge. In the second study, Zimmerman (2008) showed that (1) the gStudy environment can provide students with many more ways to self-regulate their learning than traditional instructional software, (2) the think-aloud methodology is an effective way to assess students' self-regulatory processes online, (3) training in self-regulated learning and time-management skills can be implemented by teachers as part of their classroom assignments and strategic planning, and (4) the microanalytic methodology (originally used to improve athletic skills) for assessing SRL processes and sources of motivation (goal setting and strategic planning, self-reflection, predictive sources of motivation) improves self-regulation. The results showed that the experimental group reported significantly greater increases than the control group in time management skill, self-reflection on learning, homework effectiveness, and math performance skill.
What were the limitations of the two studies? The studies raise new questions for future research: more research is needed regarding the accuracy of students' reports of using self-regulatory processes in answering the global question, How do students become masters of their own learning processes? Zimmerman (2008) notes that "there was not a standardized measure of students' writing achievement, and this limitation precluded determination of the effects of students' SRL on their writing competence" (p. 176). Students in the high- and low-SRL classes did not display significant differences in measures of motivation (beliefs, values, etc.), which is attributed to the ineffectiveness of the measures.
How would you have changed the two studies to improve the quality and usefulness of the research? I would follow Zimmerman's (2008) research approach and take his suggestions. There is a need to (a) extend the use of the four ways to assess the effectiveness of academic interventions designed to motivate recalcitrant students to engage in SRL, (b) extend the microanalytic methodology to academic tasks learned over longer periods of time, when students' motivation is expected to wane, (c) apply additional measures of motivation and feelings, such as anxiety and goal orientation, and (d) extend the think-aloud methodology to see whether planning and motivation emerge as significant predictors of students' mental models.
How would you incorporate the findings of the two studies into your classroom? I would provide an innovative environment (gStudy software, think-aloud protocols, diaries of studying, direct observation, microanalytic measures) so that students become masters of their own learning process: SRL, the "proactive processes that students use to acquire academic skill, such as setting goals, selecting and deploying strategies, and self-monitoring one's effectiveness" (p. 166).
Article Review # 3 by Nelson Dordelly-Rosales
Students’ Perceptions of Characteristics of Effective College Teachers: A Validity Study of a Teaching Evaluation Form Using a Mixed-Methods Analysis
By Anthony J. Onwuegbuzie, Ann E. Witcher, Kathleen M. T. Collins, Janet D. Filer, et al.
Reference(s):
Onwuegbuzie, A. J., Witcher, A. E., Collins, K. M. T., Filer, J. D., et al. (2007). Students' perceptions of characteristics of effective college teachers: A validity study of a teaching evaluation form using a mixed-methods analysis. American Educational Research Journal, 44(1), 113-160.
Suwandee, A. (1995). Students' perceptions of university instructors' effective teaching characteristics [Electronic version]. Studies in Language and Language Teaching Journal, 5, 6-22.
Overview
Onwuegbuzie et al. (2007) assessed the content-related and construct-related validity of the Teaching Evaluation Form (TEF). A sequential mixed-methods analysis led the researchers to the development of a more complete form, the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which includes three of the least represented themes of the TEF: student-centered, enthusiast, and ethical. The words consistency, fair evaluator, and respectful describe the ethical theme. The CRMTE is a useful data-driven instrument that will benefit all stakeholders: college administrators, teachers, and, above all, students.
Problem/Issue and the Importance/Significance
The problem addressed was the validity of students' perceptions of characteristics of effective college teachers, studied through a mixed-methods analysis of a teaching evaluation form. According to Onwuegbuzie et al. (2007), "the TEFs (a) are developed atheoretically and (b) omit what students deem to be the most important characteristics of effective college teachers" (p. 151). In an era in which information gleaned from TEFs is used to make decisions about faculty, this potential threat to validity is disturbing and warrants further research.
Research Questions
What themes reflect effective college teachers as identified by students? What students’ attributes affect perceptions of effective college teachers? What is the content-related validity and construct-related validity pertaining to a TEF?
Sample and sample selection process
Participants were 912 undergraduate and graduate students (out of 8,555 students enrolled) from various academic majors enrolled at a public university in a mid-southern state of the United States. The sample represented 10.7% of the total population and reflected 68 degree programs offered by the university.
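As a quick sanity check on the reported sampling figures, the stated proportion can be verified directly from the two numbers given (912 participants out of 8,555 enrolled students); this snippet is only an illustrative check, not part of the original study:

```python
# Verify the sampling proportion reported by Onwuegbuzie et al. (2007):
# 912 participants drawn from a total enrollment of 8,555 students.
sample_size = 912
population_size = 8555

proportion = sample_size / population_size
print(f"Sample proportion: {proportion:.1%}")  # prints "Sample proportion: 10.7%"
```

The computed value (about 10.7%) matches the percentage reported in the article.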
Data Collection and Analysis Methods
This study used a multistage mixed-methods analysis to collect data and to assess the content-related and construct-related validity of the TEF. The researchers approached instructors before the study began to solicit the participation of their students and thus maximize the participation rate. The researchers collected qualitative data (e.g., respondents' perceptions of the questionnaire) and quantitative data (e.g., response rate information, missing data information) before the study began (pilot phase) and used member-checking techniques after the major data collection phases to assess the appropriateness of the questionnaire and the adequacy of the time allotted to complete it. A sequential mixed-methods analysis (SMMA) was undertaken to analyze students' responses. The process included data reduction, data display, data transformation, data correlation, data consolidation, data comparison, and data integration. This analysis, incorporating both inductive and deductive reasoning, employed qualitative and quantitative data-analytic techniques.
Limitations/Delimitations/Assumptions
Because the sample represented students at a single university whose perspectives about effective teachers were gathered at a single point in time, the extent to which the present findings are generalizable to students from other institutions is not clear.
Trustworthiness/Validity Considerations
The study addressed population validity, ecological validity, temporal validity, and external validity. The findings cast serious doubt on the content-related and construct-related validity of TEF scores (e.g., endorsement of most themes varied by student attributes such as gender and age). The validity of responses might have been affected by the fact that "the students' perceptions were assessed via a relatively brief self-report instrument" (p. 144).
Reflection - Questions
Were the studies of value? Why or why not? Both studies were of great value. In the study by Suwandee (1995), data were obtained from 505 university students in the Faculty of Science. The results indicated that students considered an effective teacher one who has good knowledge of his or her subject and applies pedagogical skills, making difficult topics easy to understand and explaining clearly; whose personality is generous, willing to help students in and out of the classroom; and whose research and teaching background shows an instructor well prepared for class. Onwuegbuzie et al. (2007) identified characteristics that students considered to reflect effective college teaching, comprising four meta-themes (communicator, advocate, responsible, and empowering) and nine themes (responsive, professional, expert, connector, transmitter, director, enthusiast, student-centered, and ethical). The researchers developed the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which emerged from the study and included the last three descriptors, which were not represented in the TEF. These two studies have added to the current yet scant body of literature regarding the score validity of TEFs.
What were the strengths of the two studies? The studies examined students' perceptions of characteristics of effective college teachers and the factors associated with those perceptions. The researchers used mixed methods with the rationale of optimizing participant enrichment, instrument fidelity, and significance enhancement. Findings included a more complete instrument and the identification of prevalent characteristics, themes, and meta-themes for faculty training.
What were the limitations of the two studies? Subjects in both studies were students at a single university, so the results are limited in their applicability to other institutions. For this reason, similar research studies should be repeated with students at other universities to determine whether their perceptions of effective teaching are similar to those obtained in these two studies. Neither study found any relationship between GPA and students' perceptions of teaching characteristics; further research should be carried out to determine whether there is any relationship between the two variables.
How would you have changed the two studies to improve the quality and usefulness of the research? The two studies illustrated how to use a multistage mixed-methods analysis to assess the validity of the teaching evaluation forms. Future research studies should be carried out using a multistage mixed-methods analysis and involve instructors as subjects to determine their perceptions of valued teaching characteristics. Conducting a study using both students and instructors in an educational institution as subjects would improve validity of results. The results obtained for each group can then be compared to determine whether any congruency or discrepancy is observable between students' and instructors' perceptions of effective teaching.
How would you incorporate the findings of the two studies into your institution? We should promote the highest academic standards in our teaching, our scholarship, and the connections between them. Specifically, I should be able to apply the characteristics of effective teaching that emerged from these studies. I would attempt similar research in my home institution; in Venezuela, the current TEF forms do not represent all the characteristics that students consider to reflect effective college teaching. Findings regarding the characteristics of effective teaching can be inputs for faculty training. We should provide teaching support and conduct training for faculty, teaching assistants, and librarians.
Article Review # 4 by Nelson Dordelly-Rosales
Can Teacher Education Make a Difference?
By Niels Brouwer and Fred Korthagen
References:
Brouwer, N., & Korthagen, F. (2005). Can teacher education make a difference? American Educational Research Journal, 42(1), 153-224.
Crocker, R., & Dibbon, D. (2008). Teacher education in Canada. Retrieved May 24, 2009, from www.saee.ca/pdfs/Teacher_Education_in_Canada.pdf
Problem/Issue and the Importance/Significance
Brouwer and Korthagen (2005) examined how graduates' teaching competence originated from their pre-service programs, as observed in one university teacher education institution that deliberately aimed at integrating practice and theory. This longitudinal study, conducted over a period of 4.5 years, examined the impact of specific characteristics of teacher education programs involving the integration of practical experience and theoretical study. The research model included the following variables: curriculum program conditions; non-curricular program conditions; organization and content of activities during student teaching; organization and content of activities during college-based seminars; learning effects during pre-service programs; school context factors during beginning teachers' entry into the profession; beginning teachers' experiences and options; learning effects during the first in-service years; and personal background variables. The researchers demonstrated that occupational socialization in schools has a considerable influence on the development of graduates' in-service competence (educating "innovative teachers"). They discussed specific ways in which pre-service teacher education can influence beginning teachers' professional performance and competence development.
Research Questions
How does teaching competence develop over time? What are the relative influences of teacher education programs and occupational socialization in schools on the development of teaching competence? Which program characteristics are related to competence development? Does the program require beginning teachers to display, in real life situations, the competence that their pre-service programs aimed to foster?
Sample and sample selection process
The whole sample included "357 students, 128 cooperating teachers and 34 university supervisors from 24 graduate teacher education programs. On average, the beginning teachers in the subsample had more teaching experience, ranging between 12 and 30 months after graduation, than those in the whole sample, which ranged between 11 and 22 months after graduation" (Brouwer & Korthagen, 2005, p. 155). The reason is that the observations of and interviews with the beginning teachers in the subsample were based in part on their questionnaire responses. To ensure that the subsamples were as representative as possible, the researchers applied several criteria, for example, the largest possible number of school subjects. From the total number of 31 university supervisors, those with the most professional experience were selected.
Data Collection Method/ Data Analysis Method
Quantitative survey data as well as in-depth qualitative data were collected using a longitudinal survey, interviews, observations, a written questionnaire (closed items), and classroom artifacts (program documents). The first step was to determine which activities were carried out in each program, in which order, and at which moments; all of this information was then schematized. In the questionnaire, repeated measures were used to describe how the programs were implemented, to trace how the students experienced them, and to record their self-evaluations of their progress on the criterion variable. After graduating and finding work, the beginning teachers answered specific questions. After the programs had ended, the graduates completed one additional questionnaire (with a few factual questions for those graduates who had not found work as beginning teachers). The university supervisors completed a questionnaire after completion of the entire program. Findings were reported from three epistemological perspectives: the ecological (collaboration and contextual conditions), the genetic (beginning teachers' experiences), and the activity perspective (respondents' actions in classrooms and schools).
Reflection - Questions
Were the studies of value? Why? The study of teacher education in Canada by Crocker and Dibbon (2008) examined program structures, content emphasis and usefulness, perceptions of teaching knowledge and skill, the practicum experience, and the transition into the teaching profession. Among the important findings, the researchers found that (1) teacher education programs across Canada differ markedly in structure and duration, and (2) there were significant variations among the respondent groups' perceptions of program content, emphasis, and quality. Relatively few graduates (about 13%) gave overall "excellent" ratings to their teacher education programs, while about half gave "good" ratings. To the researchers, these areas of content, knowledge, and skill are highly valued in the field but are not emphasized as strongly in teacher education programs as they might be. Brouwer and Korthagen (2005) analyzed the structure of teacher education programs. They found that those programs may be counterproductive to student teacher learning and that, consequently, teacher educators may not display the best examples of good teaching. They also found that during and immediately after their pre-service programs, teachers experience a distinct attitude shift that entails an adjustment to the teaching practices existing in schools. The authors showed that "integrative" theory-practice approaches in teacher education, in which student teachers' practical experiences are closely linked to theoretical input, strengthen graduates' innovative teaching competence.
What were the strengths of the two studies? Both are strong longitudinal studies that highlight the importance of integrating theory and practice in pre-service teacher education programs and support the need for educating innovative teachers. Both offer important suggestions for the design of teacher education programs and the conduct of teacher education research, for example, finding better ways to support and mentor novice teachers, developing stronger models of collaboration between teachers and the institutions they serve, and developing a common vision for teacher education that articulates core content and competencies. Teacher education research should take a more longitudinal, comparative approach.
What were the limitations of the two studies? Though large-scale longitudinal surveys may offer some advantages in terms of reducing validity threats, the literature suggests that researchers should be prepared to deal with problems related to the longevity of longitudinal surveys. Among the limitations were resource restrictions, sample size, and the absence of comparative information from other similar studies.
How would you have changed the two studies to improve the quality and usefulness of the research? I would take into account suggestions provided by the researchers: (1) refining the selection of respondents and the measurement of criterion variables, (2) intensifying qualitative data collection during pre-service programs and carrying out repeated measurements and observations at an increasing number of standardized moments after graduation, and (3) developing a drop-out study that could produce clues about differences between graduates who did and did not seek and find work as teachers, associated with variables other than gender, number of applications, or progress during the pre-service program.
How would you incorporate the findings of the two studies into your classroom? Repeated cross-sectional studies or additional longitudinal studies would be of great value in examining trends in teacher education. I would like to be engaged in longitudinal research, particularly cohort studies. We should focus on the ways in which prospective teachers learn from practice and develop competence and positive attitudes. The goal should be to equip teachers for entry into the teaching profession through problem-based learning, authentic contexts, and authentic materials.
Reflective Summary of 4 articles
In the first article, Hadjioannou (2007) argued that the best way to understand an educational phenomenon is to view it in its context. To that end, she used different qualitative approaches such as recorded class sessions, interviews, and field notes. As a result, she identified important elements that appeared to be related to students’ involvement in dialogic discussions and to the social relationships in the classroom, and she provided descriptions of classroom environments that support authentic discussions. In the second article, Zimmerman (2008) used qualitative approaches such as portfolio assessments, direct participant observation, and survey questionnaires. The focus was on the development of online measures of self-regulated learning (SRL) processes and motivational feelings using innovative methods such as computer traces (gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. The study adopted an inductive approach to its reasoning: observations were made from data collected through survey questionnaires, and the author then sought to work toward a theoretical integration of what he had found. The study thus moved from the data to a theory and vice versa, focusing on the uniqueness of the students in the self-regulation training group. The results revealed that students in the training group reported greater increases in time management skill and self-reflection on their learning than those in the control group.
In sum, both qualitative studies tended to be oriented toward individuals and case studies. They allowed for a richer analysis of subjects and for information to be gathered that would otherwise be entirely missed by a quantitative approach. The qualitative research focused on collecting, analyzing, and interpreting data by observing what people did and said, involving a continual interplay between theory and analysis. In analyzing qualitative data, the researchers discovered patterns such as changes over time or possible causal links between variables. The findings were a personal construction of how researchers viewed events and their job was to persuade us that their interpretation was valid. From the phenomenological point of view, the authors held that the subjects’ perceptions define reality.
In the last two articles, the authors applied quantitative research methods. Onwuegbuzie et al. (2007) developed a validity study of a teaching evaluation form (TEF). The researchers assessed 912 college students’ perceptions through a survey questionnaire and identified a list of characteristics that students considered descriptors of effective college teaching, three of which were not represented in the TEF. As a result, they were able to develop a new and more complete form called the CARE-RESPECTED Model of Teaching Evaluation (CRMTE). Brouwer and Korthagen (2005) conducted a longitudinal study over a period of 4.5 years using questionnaires, interviews, observations, and analysis of classroom artifacts to find out whether occupational socialization in schools has a considerable influence on the development of graduate teachers’ in-service competence. The researchers quantified the variables of interest and examined the relationships between the variables mathematically through statistical analysis, showing that “integrative” theory-practice approaches in teacher education strengthen graduates’ innovative teaching competence. Simply put, the quantitative methods in both studies were about numbers: objective, hard data and statistically valid results. Tools were used to minimize bias in collecting information, and both studies involved gathering absolute, numerical data and testing hypotheses while striving for neutrality.
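To make concrete what “examining the relationships between the variables mathematically” can look like in practice, here is a minimal sketch of a correlation analysis of the kind such quantitative studies rely on. The data values and variable names below are hypothetical illustrations, not the studies’ actual data.

```python
# Minimal sketch of a quantitative relationship test: computing a
# Pearson correlation between two variables. The ratings below are
# invented illustration values, NOT data from the reviewed studies.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings for ten graduates: theory-practice integration
# experienced in the program vs. observed in-service competence.
integration = [3.1, 4.2, 2.8, 3.9, 4.5, 3.3, 2.9, 4.0, 3.6, 4.4]
competence = [2.9, 4.0, 3.0, 3.7, 4.6, 3.1, 2.7, 4.1, 3.4, 4.2]

print(f"r = {pearson_r(integration, competence):.2f}")
```

A coefficient near +1 would indicate the kind of positive association the researchers reported between integrative program features and innovative teaching competence; a real analysis would also test the coefficient for statistical significance.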
In conclusion, researchers showed how to work collaboratively across qualitative and quantitative research paradigms. Mixed research rests on rich and varied approaches, which come from multiple disciplines to address different research topics.
Bringing the background to the foreground: What do classroom environments that support authentic discussions look like?
References:
Hadjioannou, X. (2007). Bringing the background to the foreground: What do classroom environments that support authentic discussions look like? American Educational Research Journal, 44(2), 370-399.
Mapiasse, S. (2007). Influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia [Electronic version]. International Education Journal, 8(2), 393-407.
Overview
Hadjioannou (2007) focused on authentic or dialogic discussions in the classroom. Authentic discussions are a classroom-based speech genre in which participants collectively explore issues of interest by articulating ideas and opinions. A case study was conducted to shed light on “authentic discussions” using different qualitative approaches such as recorded class sessions, interviews, and field notes. The researcher identified seven elements that appeared to be related to the students’ involvement in classroom activities and to the social relationships among community members: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation.
Problem/Issue and the Importance/Significance
Hadjioannou (2007) aimed to answer the question: What do classroom environments that support authentic discussions look like? The study examined the features of the environment of a fifth-grade classroom community. The author reported part of a wider qualitative study that sought (a) to examine the issue of interpersonal relationships within the classroom, specifically to analyze the texture of talk in the authentic discussions of the community under study, (b) to explore participant perspectives, and (c) to evaluate the classroom environment.
Research Question
What are the features of the classroom environment of this discourse community that frequently used authentic discussions?
Sample and sample selection process
The community under study was a fifth-grade class of 24 students and their teacher at Grassroots Elementary School (a pseudonym), a quintessentially middle-class school in a midsize town in Florida.
Data Collection Method/ Data Analysis Method
Data collection included observation, participant interviews, and audio and video recordings of class sessions on an almost daily basis over a five-month period. The process of identifying the major elements of the classroom environment was a generative one, beginning with the initial coding of the field notes and the interview transcripts. Each individual was interviewed four times using a flexible interview protocol, and the author audio- and video-recorded four book talk sessions, which were transcribed verbatim and analyzed through discourse analysis. The goal was to use a database with highly contextualized descriptors to systematically illustrate the content of the data. The findings of the discourse analysis were used primarily for describing the texture of talk in authentic discussions, but also for capturing the elements of the classroom environment in action.
Trustworthiness/Validity Considerations
In addressing credibility, Hadjioannou (2007) presented a detailed picture of the phenomenon under scrutiny. The investigator provided sufficient detail about the context of the fieldwork, identified the elements that seemed to shape the environment of the classroom community under study, and described “how those elements functioned as repeatable threads woven to create the fabric of the classroom’s social life” (p. 374). The researcher suggested that reproducing the environment described in this study in another classroom would be impossible because “the environments in communities are in constant flux, and they are shaped by the personalities and the agendas of community members as well as by the unique circumstances of each community” (p. 396). However, through dialogic discussions and by cultivating amiable relations among students, teachers can provide opportunities for student self-expression, lively interactions, and substantive collaboration in any classroom.
Ethical Issues
Confidentiality and anonymity were guaranteed to the participants of the study. In the ethics literature, confidentiality is commonly viewed as akin to the principle of privacy. In this study the researcher used a pseudonym to identify the elementary school under study.
Reflection - Questions
Were the studies of value? Why or why not? Mapiasse (2007) examined the influence of the democratic climate of classrooms on student civic learning in North Sulawesi, Indonesia, and analyzed seven dimensions that support a democratic climate in the classroom: active participation, avoidance of textbook-dominated instruction, reflective thinking, student decision-making and problem-solving choices, controversial issues, recognition of human dignity, and relevance. Hadjioannou (2007) explored the environment of a fifth-grade classroom community in Florida and analyzed the elements that support authentic discussions: physical environment, curricular demands and enacted curriculum, teacher beliefs, student beliefs about discussions, relationships among members, classroom procedures, and norms of classroom participation. Both studies identified the elements that seem to shape the classroom environment and described how those elements functioned as repeatable threads woven to create the fabric of authentic discussions and a democratic climate in the classroom’s social life. The two studies provide insights of great value for teaching and learning.
What were the strengths of the two studies? The results indicated that a democratic climate and authentic discussions have significant effects on student engagement, knowledge, and interpretation skill. Mapiasse (2007) centered on the advantages of a democratic environment in the classroom; Hadjioannou (2007) emphasized the importance of interpersonal and social interaction among students and teachers. The classroom environment is extremely important to effective teaching and learning. These studies described in great detail different indicators of a good classroom social environment, and they are also good examples of qualitative research.
What were the limitations of the two studies? Subjects in each study were students of only one institution. Therefore, the results were limited in their applicability to other institutions. Similar research studies should be repeated in other institutions and in different subjects to determine whether those aspects of the classroom environment that appeared essential to effective teaching are similar to those obtained in these studies.
How would you have changed the two studies to improve the quality of the research? For the first study, I would enlarge the sample size and I would add a questionnaire for data collection. For the second study, I would add participant interviews and audio and video recording of class sessions. In addition, for the first study, it would also be necessary to explain instrument validation and the reliability of items as the second study did.
How would you incorporate the findings of the two studies into your classroom? I would like to develop a similar qualitative research study selecting a convenience sample of schools in Venezuela. In research and teaching, I would incorporate the democratic environment using meaningful classroom activities. I would work toward knowing my students and use this knowledge to create positive, trusting, and respectful relationships with them.
It is important to engage students in authentic dialogue, discussion, and learning activities, especially in civic education classrooms that involve law and education. We should provide students with opportunities to gain a deeper understanding of civic values and enable them to apply democratic values critically and responsibly in their social interactions; that is, to engage individuals and groups in developing a clear statement of belief about what strong democracy would look like.
Article Review # 2 by Nelson Dordelly-Rosales
Investigating Self-Regulation and Motivation:
Historical Background, Methodological Developments, and Future Prospects
References:
Zimmerman, B. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166-183.
Zimmerman, B. (2002). Becoming a self-regulated learner: An overview. Retrieved June 2, 2009, from http://findarticles.com/p/articles/mi_m0NQM/is_2_41/ai_90190493/
Overview
Zimmerman (2008) assessed students’ self-regulated learning (SRL) online. The focus was on processes and motivational feelings or beliefs regarding learning in authentic contexts, using computer “traces” (the gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. The results revealed that students in high-SRL online classes were more engaged in their writing than students in low-SRL classes, and that students in the training group reported significantly greater increases in time management skill and self-reflection on their learning than those in the control group. Students in the self-regulation training condition also displayed increases in several measures of motivation: their willingness to exert effort, their task interest, their learning-goal orientation, and their perceptions of self-efficacy all increased after training, while their feelings of helplessness declined significantly. Students in “the self-regulation training group displayed significantly greater gains in math achievement than students in the control group” (p. 175).
Problem/Issue and the Importance/Significance
The study addressed the issue of an innovative environment and how it affects students’ use of self-regulatory processes during the course of learning. The study is significant because it illuminates the motivation and self-regulation process. One of the lessons for instructors and learners was that self-regulation strategy measures can predict students’ academic grades and their teachers’ ratings of their proactive efforts to learn in class.
Research Questions
The first question concerned the innovative software program (called “gStudy environment”), that is, how traces measure SRL as compared to self-reported measures. The researcher assessed changes in self-regulation during learning. The second question dealt with students’ levels of SRL in personally managed contexts, such as at home or in the library. The idea was to find out if students’ levels of SRL were linked to improvements in the students’ overall academic achievement. The third question involved whether teachers can modify their classrooms to foster increases in self-regulated learning. The fourth question concerned the role of students’ motivational feelings and beliefs in initiating and sustaining changes in their SRL.
Sample and sample selection process
Teachers were randomly assigned to either an experimental or a control group. Nine teachers were trained to convey the underlying cyclical model and to develop homework exercises, quizzes, and a final examination in arithmetic skill. The control group of eight teachers gave the same homework assignments and tests but received no self-regulation training. The students in both experimental conditions kept diary accounts of SRL events.
Data Collection/Analysis Methods
The author used innovative qualitative as well as quantitative methods that included teacher and student data collection and different analysis methods (observation forms, portfolio assessments, interviews, and questionnaires) to measure SRL. Teachers in the SRL training condition gave students a copy of the cyclical model of self-regulation along with a picture of a “learning expert,” who recommended self-regulatory practices that the teacher modeled for them. Students were given daily feedback and were encouraged to set challenging goals and choose a specific strategy for themselves. Students in the experimental group were given points on the basis of their homework answers. The students’ interests, attitudes, and self-related cognition were assessed before and after a five-week training program. The students’ calibration of the accuracy of their achievement was significantly correlated with their actual posttest scores.
Instructional and ethical issues
Technology is a tool that can change the nature of SRL. However, the role of the teacher and instructor remains critical in providing guidance and support for self-regulated academic learning.
Reflection - Questions
Were the studies of value? Why or why not? In an earlier study, Becoming a Self-Regulated Learner, Zimmerman (2002) showed that self-regulated learning (SRL) is not a mental ability or an academic performance skill; rather, it is the self-directive process by which learners transform their mental abilities into academic skills. The author identified how a student’s use of specific learning processes, level of self-awareness, and motivational beliefs combine to produce self-regulated learners. In the more recent study, Zimmerman (2008) showed that, compared to control students, SRL-trained students displayed significant increases in homework effectiveness, time management skills, a broad array of self-reflection measures, and math performance skill (in fact, the self-regulation training group passed an entrance exam for admission to a higher-level school at a rate 50% higher than past cohorts). Both studies were of value.
What were the strengths of the two studies? Zimmerman (2002) showed that self-regulated, independent learners take responsibility for what they learn, and he analyzed how far they can go with this knowledge. In the second study, Zimmerman (2008) showed that (1) the gStudy environment can provide students with many more ways to self-regulate their learning than traditional instructional software, (2) the think-aloud methodology is an effective way to assess students’ self-regulatory processes online, (3) training in self-regulated learning and time-management skills can be implemented by teachers as part of their classroom assignments and strategic planning, and (4) the microanalytic methodology (originally used to improve athletic skills) for assessing SRL processes and sources of motivation (goal setting and strategic planning, self-reflection, predictive sources of motivation) improves self-regulation. The results showed that the experimental group reported significantly greater increases than the control group in time management skill, self-reflection on their learning, homework effectiveness, and math performance skill.
What were the limitations of the two studies? The studies still raise new questions for future research: more research is needed regarding the accuracy of students’ reports of their use of self-regulatory processes. In trying to answer the global question of how students become masters of their own learning processes, Zimmerman (2008) noted that “there was not a standardized measure of students’ writing achievement, and this limitation precluded determination of the effects of students’ SRL on their writing competence” (p. 176). Students in the high- and low-SRL classes did not display significant differences in measures of motivation (beliefs, values, etc.), which the author attributed to the ineffectiveness of the measures.
How would you have changed the two studies to improve the quality and usefulness of the research? I would follow Zimmerman’s (2008) research approach and take up his suggestions. There is a need to (a) extend the use of the four assessment approaches to evaluate the effectiveness of academic interventions designed to motivate recalcitrant students to engage in SRL, (b) extend the microanalytic methodology to learning academic tasks over longer periods of time, when students’ motivation is expected to wane, (c) apply additional measures of motivation and feelings, such as anxiety and goal orientation, and (d) extend the think-aloud methodology to see whether planning and motivation emerge as significant predictors of students’ mental models.
How would you incorporate the findings of the two studies into your classroom? I would provide an innovative environment (gStudy software, think-aloud protocols, diaries of studying, direct observation, microanalytic measures) so that students become masters of their own learning process: SRL comprises the “proactive processes that students use to acquire academic skill, such as setting goals, selecting and deploying strategies, and self-monitoring one’s effectiveness” (p. 166).
Article Review # 3 by Nelson Dordelly-Rosales
Students’ Perceptions of Characteristics of Effective College Teachers: A Validity Study of a Teaching Evaluation Form Using a Mixed-Methods Analysis
By Anthony J. Onwuegbuzie, Ann E. Witcher, Kathleen M. T. Collins, Janet D. Filer, et al.
Reference(s):
Onwuegbuzie, A. J., Witcher, A. E., Collins, K. M. T., Filer, J. D., et al. (2007). Students’ perceptions of characteristics of effective college teachers: A validity study of a teaching evaluation form using a mixed-methods analysis. American Educational Research Journal, 44(1), 113-160.
Suwandee, A. (1995). Students’ perceptions of university instructors’ effective teaching characteristics [Electronic version]. Studies in Language and Language Teaching Journal, 5, 6-22.
Overview
Onwuegbuzie et al. (2007) assessed the content-related validity and construct-related validity of a teaching evaluation form (TEF). A sequential mixed-methods analysis led the researchers to develop a more complete form, the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which includes the three themes least represented in the TEF: student-centered, enthusiast, and ethical. The descriptors consistency, fair evaluator, and respectful characterize the ethical theme. The CRMTE is a useful data-driven instrument that should benefit all stakeholders: college administrators, teachers, and, above all, students.
Problem/Issue and the Importance/Significance
The problem addressed was the validity of a teaching evaluation form for capturing students’ perceptions of the characteristics of effective college teachers. According to Onwuegbuzie et al. (2007), “the TEFs (a) are developed atheoretically and (b) omit what students deem to be the most important characteristics of effective college teachers” (p. 151). In an era in which information gleaned from TEFs is used to make decisions about faculty, this potential threat to validity is disturbing and warrants further research.
Research Questions
What themes reflect effective college teaching as identified by students? Which student attributes affect perceptions of effective college teachers? What are the content-related validity and construct-related validity of a TEF?
Sample and sample selection process
Participants were 912 undergraduate and graduate students (out of 8,555 students enrolled) from various academic majors enrolled at a public university in a mid-southern state of the United States. The sample represented 10.7% of the total population and reflected 68 degree programs offered by the university.
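As a quick arithmetic check on the reported sampling figures (a minimal sketch; the 912 respondents and 8,555 enrolled students come from the study itself):

```python
# Verify the reported sampling rate in Onwuegbuzie et al. (2007):
# 912 respondents out of 8,555 enrolled students.
enrolled = 8555
respondents = 912

sampling_rate = respondents / enrolled
print(f"Sampling rate: {sampling_rate:.1%}")  # matches the reported 10.7%
```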
Data Collection and Analysis Methods
This study used a multistage mixed-methods analysis to collect data and to assess the content-related validity and construct-related validity of the TEF. The researchers approached instructors and professors before the study began to solicit the participation of their students and thus maximize the participation rate. They collected qualitative data (e.g., respondents’ perceptions of the questionnaire) and quantitative data (e.g., response rate information, missing data information) before the study began (pilot phase) and, after the major data collection phases, used member-checking techniques to assess the appropriateness of the questionnaire and the adequacy of the time allotted to complete it. A sequential mixed-methods analysis (SMMA) was undertaken to analyze students’ responses. The process included data reduction, data display, data transformation, data correlation, data consolidation, data comparison, and data integration. This analysis, incorporating both inductive and deductive reasoning, employed qualitative and quantitative data-analytic techniques.
Limitations/Delimitations/Assumptions
Because the sample represented students at a single university whose perspectives about effective teachers were gathered at a single point in time, the extent to which the present findings are generalizable to students from other institutions is not clear.
Trustworthiness/Validity Considerations
The focus of the study was on population validity, ecological validity, temporal validity, and adequate external validity. The findings cast serious doubt on the content-related validity and construct-related validity of TEF scores (e.g., endorsement of most themes varied by student attributes such as gender and age). The validity of responses might have been affected by the fact that “the students’ perceptions were assessed via a relatively brief self-report instrument” (p. 144).
Reflection - Questions
Were the studies of value? Why or why not? Both studies were of great value. In the study by Suwandee (1995), data were obtained from 505 university students in the Faculty of Science. The results indicated that students considered an effective teacher to be one who has a good knowledge of the subject and applies pedagogical skills, making difficult topics easy to understand and explaining them clearly; whose personality is generous and who is willing to help students in and out of the classroom; and whose research and teaching background shows an instructor well prepared for class. Onwuegbuzie et al. (2007) identified characteristics that students considered descriptors of effective college teaching, comprising four meta-themes (communicator, advocate, responsible, and empowering) and nine themes (responsive, professional, expert, connector, transmitter, director, enthusiast, student-centered, and ethical). From the study, the researchers developed the CARE-RESPECTED Model of Teaching Evaluation (CRMTE), which included the last three descriptors, which were not represented in the TEF. These two studies have added to the current yet scant body of literature regarding the score validity of TEFs.
What were the strengths of the two studies? The studies examined students’ perceptions of the characteristics of effective college teachers and the factors associated with those perceptions. The researchers used mixed methods with the rationale of optimizing participant enrichment, instrument fidelity, and significance enhancement. Findings included a more complete instrument and the identification of prevalent characteristics, themes, and meta-themes for faculty training.
What were the limitations of the two studies? Subjects in both studies were students of only one university. The results are, therefore, limited in their applicability to other institutions. For this reason, similar research studies should be repeated with students at other universities to determine whether their perceptions of effective teaching are similar to those obtained in these two studies. Neither study found any relationship between GPA and students’ perceptions of teaching characteristics; further research should be carried out to determine whether there is any relationship between the two variables.
How would you have changed the two studies to improve the quality and usefulness of the research? The two studies illustrated how to use a multistage mixed-methods analysis to assess the validity of the teaching evaluation forms. Future research studies should be carried out using a multistage mixed-methods analysis and involve instructors as subjects to determine their perceptions of valued teaching characteristics. Conducting a study using both students and instructors in an educational institution as subjects would improve validity of results. The results obtained for each group can then be compared to determine whether any congruency or discrepancy is observable between students' and instructors' perceptions of effective teaching.
How would you incorporate the findings of the two studies into your institution? We should promote the highest academic standards in our teaching, our scholarship, and the connections between them. Specifically, I should be able to apply the characteristics of effective teaching that emerged from those studies, and I would attempt to do similar research in my home institution. In Venezuela the current TEFs do not represent all the characteristics that students consider to reflect effective college teaching. Findings regarding the characteristics of effective teaching can serve as inputs for faculty training. We should provide teaching support and conduct training for faculty, teaching assistants, and librarians.
Article Review # 4 by Nelson Dordelly-Rosales
Can Teacher Education Make a Difference?
By Niels Brouwer and Fred Korthagen
References:
Brouwer, N., & Korthagen, F. (2005). Can Teacher Education Make a Difference?
American Educational Research Journal, 42 (1), 153-224.
Crocker, R., & Dibbon, D. (2008). Teacher Education in Canada. Retrieved May 24, 2009
from www.saee.ca/pdfs/Teacher_Education_in_Canada.pdf
Problem/Issue and the Importance/Significance
Brouwer and Korthagen (2005) examined how graduates' teaching competence originated from their pre-service programs, as observed in one university teacher education institution that deliberately aimed at integrating practice and theory. This longitudinal study, conducted over a period of 4.5 years, examined the impact of specific characteristics of teacher education programs in the Netherlands involving the integration of practical experience and theoretical study. The research model included the following variables: curriculum program conditions, non-curricular program conditions, organization and content of activities during student teaching, organization and content of activities during college-based seminars, learning effects during pre-service programs, school context factors during beginning teachers' entry into the profession, beginning teachers' experiences and options, learning effects during the first in-service years, and personal background variables. The researchers demonstrated that occupational socialization in schools has a considerable influence on the development of graduates' in-service competence (educating "innovative teachers"). They discussed specific ways in which pre-service teacher education can influence beginning teachers' professional performance and competence development.
Research Questions
How does teaching competence develop over time? What are the relative influences of teacher education programs and occupational socialization in schools on the development of teaching competence? Which program characteristics are related to competence development? Does the program require beginning teachers to display, in real life situations, the competence that their pre-service programs aimed to foster?
Sample and sample selection process
The whole sample included "357 students, 128 cooperating teachers and 34 university supervisors from 24 graduate teacher education programs. On average, the beginning teachers in the sub sample had more teaching experience, ranging between 12 and 30 months after graduation, than those in the whole sub-sample, which ranged between 11 and 22 months after graduation" (Brouwer & Korthagen, 2005, p. 155). The reason is that the observations of and interviews with the beginning teachers in the sub-sample were based in part on their questionnaire responses. To ensure that the sub-samples were as representative as possible, the researchers applied several criteria, for example, covering the largest possible number of school subjects. From the total number of 31 university supervisors, those with the most professional experience were selected.
Data Collection Method/ Data Analysis Method
Quantitative survey data as well as in-depth qualitative data were collected using quantitative and qualitative methods: a longitudinal survey, interviews, observations, a written questionnaire (closed items), and classroom artifacts (program documents). The first step was to determine which activities were carried out in each program, in which order, and at which moments. Then all of the information was schematized. In the questionnaire, repeated measures were used to describe how the programs were implemented, to trace how the students experienced them, and to record their self-evaluations of their progress on the criterion variable. After graduating and finding work, the beginning teachers answered specific questions. After the programs had ended, the graduates completed one additional questionnaire (a few factual questions for those graduates who had not found work as beginning teachers). The university supervisors completed a questionnaire after completion of the entire program. Findings were reported from three epistemological perspectives: the ecological (collaboration and contextual conditions), the genetic (beginning teachers' experiences), and the activity perspective (respondents' actions in classrooms and schools).
Reflection - Questions
Were the studies of value? Why? The study on teacher education in Canada by Crocker and Dibbon (2008) examined program structures, content emphasis and usefulness, perceptions of teaching knowledge and skill, the practicum experience, and the transition into the teaching profession. Among the important findings, the researchers found that (1) teacher education programs across Canada differ markedly in structure and duration, and (2) there were significant variations among the respondent groups' perceptions of program content, emphasis, and quality. Relatively few graduates (about 13%) gave overall "excellent" ratings to their teacher education programs, while about half gave "good" ratings. In the researchers' view, these areas of content, knowledge, and skill are highly valued in the field but are not being emphasized as strongly in teacher education programs as they might be. Brouwer and Korthagen (2005), in the Netherlands, analyzed the structure of teacher education programs. They found that those programs may be counterproductive to student teacher learning and that, consequently, teacher educators may not display the best examples of good teaching. They also found that during and immediately after their pre-service programs, teachers experience a distinct attitude shift that entails an adjustment to the teaching practices existing in schools. The authors showed that "integrative" theory-practice approaches in teacher education, in which student teachers' practical experiences are closely linked to theoretical input, strengthen graduates' innovative teaching competence.
What were the strengths of the two studies? Both are strong longitudinal studies that highlight the importance of integrating theory and practice in pre-service teacher education programs and support the need for educating innovative teachers. Important suggestions for the design of teacher education programs and the conduct of teacher education research can be drawn from both studies, for example, finding better ways to support and mentor novice teachers, developing stronger models of collaboration between teachers and the institutions they serve, and developing a common vision for teacher education that articulates core content and competencies. Teacher education research should take a more longitudinal, comparative approach.
What were the limitations of the two studies? Though large-scale longitudinal surveys may offer some advantages in terms of reducing validity threats, the literature suggests that researchers should be prepared to deal with problems related to the longevity of longitudinal surveys. Some of the limitations were resource restrictions, sample size, and the absence of comparative information from other similar studies.
How would you have changed the two studies to improve the quality and usefulness of the research? I would take into account the suggestions provided by the researchers: (1) refining the selection of respondents and the measurement of criterion variables, (2) intensifying qualitative data collection during pre-service programs and carrying out repeated measurements and observations at an increasing number of standardized moments after graduation, and (3) developing a drop-out study that could produce clues about differences between graduates who did and did not seek and find work as teachers, associated with variables other than gender, number of applications, or progress during the pre-service program.
How would you incorporate the findings of the two studies into your classroom? Repeated cross-sectional studies or more longitudinal studies would be of great value in examining trends in teacher education. I would like to be engaged in longitudinal research, particularly cohort studies. We should focus on the ways in which prospective teachers learn from practice and develop competence and positive attitudes. The goal should be to equip teachers for entry into the teaching profession through approaches encompassing problem-based learning and authentic contexts and materials.
Reflective Summary of 4 articles
In the first article, Hadjioannou (2007) argued that the best way to understand an educational phenomenon is to view it in its context. To that end, she used different qualitative approaches such as recorded class sessions, interviews, and field notes. As a result, she identified important elements that appeared to be related to students' involvement in dialogic discussions and to the social relationships in the classroom, and she provided descriptions of classroom environments that support authentic discussions. In the second article, Zimmerman (2008) used qualitative approaches such as portfolio assessments, direct participant observation, and survey questionnaires. The focus was on the development of online measures of self-regulated learning (SRL) processes and motivational feelings using innovative methods such as computer traces (gStudy software), think-aloud protocols, diaries of studying, direct observation, and microanalytic measures. This study adopted an inductive approach to its reasoning: observations were made from data collected through survey questionnaires, and the researcher then worked towards a theoretical integration of what had been found, moving from the data to a theory and vice versa. The focus was on the uniqueness of the students in the self-regulation training group. The results revealed that students in the training group reported a greater increase in time-management skills and self-reflection on their learning than those in the control group.
In sum, both qualitative studies tended to be oriented toward individuals and case studies. They allowed for a richer analysis of subjects and for information to be gathered that would otherwise be entirely missed by a quantitative approach. The qualitative research focused on collecting, analyzing, and interpreting data by observing what people did and said, involving a continual interplay between theory and analysis. In analyzing qualitative data, the researchers discovered patterns such as changes over time or possible causal links between variables. The findings were a personal construction of how researchers viewed events and their job was to persuade us that their interpretation was valid. From the phenomenological point of view, the authors held that the subjects’ perceptions define reality.
In the last two articles, the authors applied quantitative research methods. Onwuegbuzie et al. (2007) developed a validity study of a teaching evaluation form (TEF). The researchers assessed 912 college students' perceptions through a survey questionnaire. As a result, they identified a list of characteristics that students considered descriptors of effective college teaching, three of which were not represented in the TEF, and they were able to develop a new and more complete form called the CARE-RESPECTED Model of Teaching Evaluation (CRMTE). Brouwer and Korthagen (2005) developed a longitudinal study over a period of 4.5 years using questionnaires, interviews, observations, and analysis of classroom artifacts to find out whether occupational socialization in schools has a considerable influence on the development of graduate teachers' in-service competence. The researchers quantified the variables of interest and examined the relationships between the variables mathematically through statistical analysis. They showed that "integrative" theory-practice approaches in teacher education strengthen graduates' innovative teaching competence. The quantitative methods in both studies were, simply put, about numbers: objective, hard data and statistically valid results. Tools were used to minimize any bias in collecting information. The studies involved gathering absolute data, such as numerical data, testing hypotheses, and relying on the methods' supposed neutrality.
In conclusion, researchers showed how to work collaboratively across qualitative and quantitative research paradigms. Mixed research rests on rich and varied approaches, which come from multiple disciplines to address different research topics.
Monday, 15 June 2009
Comparison between Historical Research and Evaluation Research
Reference:
Borg, W.R., and Gall, M.D., (1999). Educational Research: An Introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Chapters 16 and 17
Summary by Nelson Dordelly-Rosales, June 20th, 2009
Historical Research: What is it?
• Historical research is the systematic search for facts relating to questions about the past, and the interpretation of these facts. By studying the past, the historian hopes to achieve a better understanding of present institutions, practices and issues in education.
• There is no single, definable method of historical inquiry (Edson, 1988)
What does historical research mean from the qualitative and quantitative perspectives?
• From the qualitative perspective, historical research means historical inquiry. It proposes to learn from past discoveries and mistakes, and provides a moral framework for understanding the present and predicting future trends.
• From the quantitative perspective, historical research is the systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events.
How to conduct historical research?
– Definition of a problem: topic(s) or questions to be investigated
– Formulation of questions to be answered, hypotheses to be tested or topics to be investigated.
– Systematic collection and analysis of historical data
– Summary and evaluation of data and the historical sources
– Interpretation: present the pertinent facts within an interpretive framework
– Production of a synthesis of findings or confirmation/disconfirmation of hypotheses or questions (Borg & Gall, 1999, p. 811-831)
What are the types of historical sources?
• Preliminary sources: published aids for identifying the secondary source literature in history. An important requirement is to list key descriptors for one’s problem or topic, e.g., bibliographies and reference works.
• Primary: those documents in which the individual describing the event was present when it occurred, e.g., diaries, manuscripts.
• Secondary: documents in which the individual describing the event was not present but obtained a description from someone else, who may or may not have directly observed the event, e.g., historian’s interpretations (Borg & Gall, 1999, p.815-817).
How to record information from historical sources?
• Examining availability and deciding what information to record from:
- Documents: diaries, memoirs, legal records, court testimony, newspapers, periodicals, business records, notebooks, yearbooks, diplomas, committee reports, memos, institutional files, textbooks, tests.
- Quantitative records: census records, school budgets, school attendance records, test scores.
- Oral history: e.g., records and interviews.
- Relics: an object whose physical or visual properties provide information about the past.
• Summarizing quantitative data (Borg & Gall, 1999, p. 818-819)
How to evaluate the worth and meaning of historical sources?
• External criticism: evaluation of the nature of the source, e.g., Is it genuine? Is it the original copy? Who wrote it? Under what conditions?
• Internal criticism: the evaluation of the information contained in the source, e.g., is it probable that people would act in the way described by the author? Do the budget figures mentioned by the writer seem reasonable? (Borg & Gall, 1999, p. 821-823).
How to interpret historical research?
• Use of concepts to interpret historical information:
- Concepts are indispensable for organizing the phenomena that occurred in the past.
- Group together those persons, events, or objects that share a common set of attributes.
- Place limits on the interpretation of the past.
• Awareness of bias, values, and personal interests allows researchers to interpret or “reconstruct” certain aspects of past events, but not others. It also allows them to interpret past events using concepts and perspectives that originated in more recent times.
What is the role of the historical researcher?
• Historians cannot ‘prove’ that one event in the past caused another, but they can be aware of, and make explicit, the assumptions that underlie the act of ascribing causality to sequences of historical events (Borg & Gall, 1999, p. 831).
• Generalizing from historical evidence means looking for consistency across subjects or an individual in different circumstances (Borg & Gall, 1999, p. 834).
• Causal inference in historical research is the process of reaching the conclusion that one set of events brought about, directly or indirectly, a subsequent set of events (Borg & Gall, 1999, p. 836).
What is Evaluation Research?
• Educational evaluation: is the process of making judgments about the merit, value, or worth of educational programs (Borg & Gall, 1999, p. 781).
• Evaluation Research: is usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy. The purpose is to collect data that will facilitate decision-making (Borg & Gall, 1999, p. 782).
• Educational Research: is usually initiated by a hypothesis about the relationship between two or more variables. The research is conducted in order to reach a conclusion about the hypothesis - to accept or reject it (Borg & Gall, 1999, p. 783).
How to conduct an ‘Evaluation Study’?
• Clarifying reasons for doing the evaluation
• Identifying the stakeholders
• Deciding what is to be evaluated
- Program goals
- Resources and procedures
- Program management
• Identifying evaluation questions
• Developing an evaluation design and timeline
• Collecting and analyzing evaluation data
• Reporting the evaluation results (Borg & Gall, 1999, p. 744-753).
What are the criteria of a good evaluation study?
• Utility: an evaluation has utility if it is informative, timely, and useful to the affected persons.
• Feasibility: the evaluation design is appropriate to the setting in which the study is to be conducted and that the design is cost-effective.
• Propriety: if the rights of persons affected by the evaluation are protected.
• Accuracy: extent to which an evaluation study has produced valid, reliable, and comprehensible information about the entity being evaluated (Borg & Gall, 1999, p.755).
What is involved in ‘quantitatively oriented evaluation’ models?
• Evaluation of the individual.
• Objectives-based evaluation for determining the merits of a curriculum or an educational program.
• Needs assessment.
• Formative and summative evaluation.
(Borg & Gall, 1999, p. 758-767).
Evaluation of the individual
• This type of research involves the assessment of students’ individual differences in intelligence and school achievement.
• It also involves evaluation of teachers, administrators, and other school personnel.
• Like the assessment of students, personnel evaluation focuses on the measurement of individual differences, and judgments are made by comparing the individual with a set of norms or criteria (Borg & Gall, 1999, p. 759)
Objectives-based evaluation: Four Models
• Discrepancy evaluation between the objectives of a program and students’ actual achievement of the objectives (Provus, 1971).
• Cost-benefit evaluation to determine the relationship between the costs of a program and the objectives that it has achieved. Comparisons are made to determine which promotes the greatest benefits for each unit of resource expenditure (Levin, 1983).
• Behavioral objectives to measure the learner’s achievement (Tyler, 1960).
• Goal-free evaluation to discover the actual effects of the program in operation that may differ from the program developers’ stated goals (Scriven, 1973).
Needs assessment
• This type of research aims to determine a discrepancy between an existing set of conditions and a desired set of conditions.
• Educational needs can be assessed systematically using quantitative research methods.
• Personal values and standards are important determinants of needs, and they should be assessed to round out one’s understanding of needs among the groups being studied.
• Needs assessment data are usually reported as group trends (Borg & Gall, 1999, p. 763)
Formative and summative evaluation
• The function of formative evaluation is to collect data about educational products while they are still being developed. The evaluative data can be used by developers to design and modify the product (Borg & Gall, 1999).
• The summative function of evaluation occurs after the product has been fully developed. It is conducted to determine how worthwhile the final product is, especially in comparison with other competing products. Summative data are useful to educators who must make purchase or adoption decisions (Borg & Gall, 1999).
Evaluation to guide program management
• It includes context evaluation, input evaluation, process evaluation, and product evaluation (CIPP). The CIPP model shows how evaluation can contribute to the decision-making process in program management (Stufflebeam et al., 1971).
• Context evaluation involves identification of problems and needs in a specific setting.
• Input evaluation concerns judgments about the resources and strategies needed to accomplish program goals and objectives.
• Process evaluation involves the collection of evaluative data once the program has been designed and put into operation.
• Product evaluation aims to determine the extent to which the goals of the program have been achieved.
What does a ‘qualitatively oriented evaluation’ model mean?
• The worth of an educational program or product depends heavily on the values and perspectives of those doing the judging.
• For example the three following models:
- Responsive evaluation (Stake, 1967)
- Adversary evaluation (positive and negative judgments about the program) (Wolf, 1975)
- Expertise-based evaluation (Eisner, 1979)
Responsive evaluation
• Focuses on the concerns, issues and values affecting the stakeholders or persons involved in the program (Stake, 1967)
• Guba and Lincoln (1989) identified four major phases that occur in evaluation:
- Initiation and organization: negotiation between the evaluator and the client.
- Identifying the concerns, issues, and values of the stakeholders using questionnaires and interviews.
- Collection of descriptive evaluation data using observations, tests, interviews, etc.
- Preparing reports of results and recommendations.
Adversary evaluation
• Adversary evaluation relates in certain respects to responsive evaluation (positive and negative judgments about the program) (Wolf, 1975). It uses a wide array of data.
• Four major stages:
- Generating a broad range of issues: the evaluation team surveys the various groups involved in the program (users, managers, funding agencies, etc.).
- Reducing the list of issues to a manageable number.
- Forming two opposing evaluation teams (the adversaries) and providing them with an opportunity to prepare arguments in favor of or in opposition to the program on each issue.
- Conducting pre-hearing sessions and a formal hearing in which the adversarial teams present their arguments and evidence before the program’s decision makers (Borg & Gall, 1999, p. 774).
Expertise-based evaluation
• Expertise-based evaluation, also known as educational connoisseurship and criticism, involves judgments about the worth of a program made by experts (Eisner, 1979)
• One aspect of connoisseurship is the process of appreciating (in the sense of becoming aware of) the qualities of an educational program and their meaning. This expertise is similar to that of an art critic who has special appreciation of an art work because of intensive study of related art works and of art theory.
• The other aspect of the method is criticism, which is the process of describing and evaluating that which has been appreciated. The validity of educational criticism depends heavily on the expertise of the evaluator.
Differences between Historical and Evaluation Research
• Historical research aims to assess the worth and meaning of historical sources: documents, records, relics, oral history, etc. The search is for facts relating to questions about the past, the interpretation of these facts, and their significance for the present.
• Evaluation research aims to assess the merit, value, or worth of educational programs and materials of any level of schooling. It facilitates decision-making concerning policy, management, or political strategy to improve educational matters.
Conclusion
• Each type of research addresses different types of questions, and each is necessary for advancing the field of education. The decision to undertake one of these types of research will depend primarily on the interests of the study. However, both historical and evaluation research draw, to varying degrees, on the qualitative and quantitative traditions of research.
• In quantitative evaluation research, objectives provide the criteria for judging the merits of the product, e.g., publication and cost, physical properties, content, instructional properties, etc. In qualitative research, the worth of an educational program or product depends heavily on the values and perspectives of researchers.
• In historical research the historian discovers objective data but also can interpret and critique, making personal observations on the worth & value of findings.
Borg, W.R., and Gall, M.D., (1999). Educational Research: An Introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Chapters 16 and 17
Summary by Nelson Dordelly-Rosales, June 20th, 2009
Historical Research: What is it?
• Historical research is the systematic search for facts relating to questions about the past, and the interpretation of these facts. By studying the past, the historian hopes to achieve a better understanding of present institutions, practices and issues in education.
• There is no single, definable method of historical inquiry (Edson, 1988)
What does historical research mean from the qualitative and quantitative perspectives?
• From the qualitative perspective, historical research means historical inquiry. It proposes to learn from past discoveries and mistakes, and provides a moral framework for understanding the present and predicting future trends.
• From the quantitative perspective, historical research is the systematic collection and objective evaluation of data related to past occurrences in order to test hypotheses concerning causes, effects, or trends of these events that may help to explain present events and anticipate future events.
How to conduct a historical research?
– Definition of a problem: topic (s) or questions to be investigated
– Formulation of questions to be answered, hypotheses to be tested or topics to be investigated.
– Systematic collection and analysis of historical data
– Summary and evaluation of data and the historical sources
– Interpretation: present the pertinent facts within an interpretive framework
– Production of a synthesis of findings or confirmation/disconfirmation of hypotheses or questions (Borg & Gall, 1999, p. 811-831)
What are the types of historical sources?
• Preliminary sources: published aids for identifying the secondary source literature in history. An important requirement is to list key descriptors for one’s problem or topic, e.g., bibliographies and reference works.
• Primary: those documents in which the individual describing the event was present when it occurred, e.g., diaries, manuscripts.
• Secondary: documents in which the individual describing the event was not present but obtained a description from someone else, who may or may not have directly observed the event, e.g., historian’s interpretations (Borg & Gall, 1999, p.815-817).
How to record information from historical sources?
• Examining availability and deciding what information to record from:
- Documents: diaries, memoirs, legal records, court testimony, newspapers, periodicals, business records, notebooks, yearbooks, diplomas, committee reports, memos, institutional files, textbooks, tests.
- Quantitative records: census records, school budgets, school attendance records, test
scores.
- Oral history: i.e., records and interviews.
- Relics: an object whose physical or visual properties provide information about the
past.
• Summarizing quantitative data (Borg & Gall, 1999, p. 818-819)
How to evaluate the worth and meaning of historical sources?
• External criticism: evaluation of the nature of the source, e.g., Is it genuine? Is it the original copy? Who wrote it? Under what conditions?
• Internal criticism: the evaluation of the information contained in the source, e.g., is it probable that people would act in the way described by the author? Do the budget figures mentioned by the writer seem reasonable? (Borg & Gall, 1999, p. 821-823).
How to interpret historical research?
• Use of concepts to interpret historical information:
- Concepts are indispensable for organizing the phenomena that occurred in the past.
- Group together those persons, events, or objects that share a common set of attributes.
- Place limits on the interpretation of the past.
• Being aware of bias, values, and personal interests allow researchers to interpret or “reconstruct” certain aspects of past events, but not others. Also, it allows interpreting past events using concepts and perspectives that originated in more recent cases.
What is the role of the historical researcher?
• Historians cannot ‘prove’ that one event in the past caused another, but they can be aware of, and make explicit, the assumptions that underlie the act of ascribing causality to sequences of historical events (Borg & Gall, 1999, p. 831).
• Generalizing from historical evidence means looking for consistency across subjects or an individual in different circumstances (Borg & Gall, 1999, p. 834).
• Causal inference in historical research is the process of reaching the conclusion that one set of events brought about, directly or indirectly, a subsequent set of events (Borg & Gall, 1999, p. 836).
What is Evaluation Research?
• Educational evaluation: is the process of making judgments about the merit, value, or worth of educational programs (Borg & Gall, 1999, p. 781).
• Evaluation Research: is usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy. The purpose is to collect data that will facilitate decision-making (Borg & Gall, 1999, p. 782).
• Educational Research: is usually initiated by a hypothesis about the relationship between two or more variables. The research is conducted in order to reach a conclusion about the hypothesis - to accept or reject it (Borg & Gall, 1999, p. 783).
How to conduct an ‘Evaluation Study’?
• Clarifying reasons for doing the evaluation
• Identifying the stakeholders
• Deciding what is to be evaluated
- Program goals
- Resources and procedures
- Program management
- Identifying evaluation questions
- Developing an evaluation design and timeline
- Collecting and analyzing evaluation data
- Reporting the evaluation results (Borg & Gall, 1999, p.744-753).
What are the criteria of a good evaluation study?
• Utility: an evaluation has utility if it is informative, timely, and useful to the affected persons.
• Feasibility: the evaluation design is appropriate to the setting in which the study is to be conducted and that the design is cost-effective.
• Propriety: if the rights of persons affected by the evaluation are protected.
• Accuracy: extent to which an evaluation study has produced valid, reliable, and comprehensible information about the entity being evaluated (Borg & Gall, 1999, p.755).
What is involved in ‘quantitatively oriented evaluation’ models?
• Evaluation of the individual.
• Objectives-based evaluation for determining the merits of a curriculum or an educational program.
• Needs assessment.
• Formative and summative evaluation.
(Borg & Gall, 1999, pp. 758-767).
Evaluation of the individual
• This type of research involves the assessment of students’ individual differences in intelligence and school achievement.
• It also involves evaluation of teachers, administrators, and other school personnel.
• Like assessment of students, personnel evaluation focuses on measurement of individual differences, and judgments are made by comparing the individual with a set of norms or a criterion (Borg & Gall, 1999, p. 759).
Objectives-based evaluation: Four Models
• Discrepancy evaluation between the objectives of a program and students’ actual achievement of the objectives (Provus, 1971).
• Cost-benefit evaluation to determine the relationship between the costs of a program and the objectives that it has achieved. Comparisons are made to determine which promotes the greatest benefits for each unit of resource expenditure (Levin, 1983).
• Behavioral objectives to measure the learner’s achievement (Tyler, 1960).
• Goal-free evaluation to discover the actual effects of the program in operation that may differ from the program developers’ stated goals (Scriven, 1973).
Needs assessment
• This type of research aims to determine a discrepancy between an existing set of conditions and a desired set of conditions.
• Educational needs can be assessed systematically using quantitative research methods.
• Personal values and standards are important determinants of needs, and they should be assessed to round out one’s understanding of needs among the groups being studied.
• Needs assessment data are usually reported as group trends (Borg & Gall, 1999, p. 763)
Formative and summative evaluation
• The function of formative evaluation is to collect data about educational products while they are still being developed. The evaluative data can be used by developers to design and modify the product (Borg & Gall, 1999).
• The summative function of evaluation occurs after the product has been fully developed. It is conducted to determine how worthwhile the final product is, especially in comparison with other competing products. Summative data are useful to educators who must make purchase or adoption decisions (Borg & Gall, 1999).
Evaluation to guide program management
• It includes context evaluation, input evaluation, process evaluation, and product evaluation (CIPP). The CIPP model shows how evaluation can contribute to the decision-making process in program management (Stufflebeam et al., 1971).
• Context evaluation involves identification of problems and needs in a specific setting.
• Input evaluation concerns judgments about the resources and strategies needed to accomplish program goals and objectives.
• Process evaluation involves the collection of evaluative data once the program has been designed and put into operation.
• Product evaluation aims to determine the extent to which the goals of the program have been achieved.
What does a ‘qualitatively oriented evaluation’ model mean?
• The worth of an educational program or product depends heavily on the values and perspectives of those doing the judging.
• For example, the following three models:
- Responsive evaluation (Stake, 1967)
- Adversary evaluation (positive and negative judgments about the program) (Wolf, 1975)
- Expertise-based evaluation (Eisner, 1979)
Responsive evaluation
• Focuses on the concerns, issues and values affecting the stakeholders or persons involved in the program (Stake, 1967)
• Guba and Lincoln (1989) identified four major phases that occur in evaluation:
- Initiation and organization: negotiation between the evaluator and the client.
- Identifying the concerns, issues, and values of the stakeholders using questionnaires and interviews.
- Collecting descriptive evaluation data using observations, tests, interviews, etc.
- Preparing reports of results and recommendations.
Adversary evaluation
• Adversary evaluation relates in certain respects to responsive evaluation (positive and negative judgments about the program) (Wolf, 1975). It uses a wide array of data.
• Four major stages:
- Generating a broad range of issues: the evaluation team surveys various groups involved in the program (users, managers, funding agencies, etc.).
- Reducing the list of issues to a manageable number.
- Forming two opposing evaluation teams (the adversaries) and providing them an opportunity to prepare arguments in favor of or in opposition to the program on each issue.
- Conducting pre-hearing sessions and a formal hearing in which the adversarial teams present their arguments and evidence before the program’s decision makers (p.774).
Expertise-based evaluation
• Expertise-based evaluation, also known as educational connoisseurship and criticism, relies on judgments about the worth of a program made by experts (Eisner, 1979).
• One aspect of connoisseurship is the process of appreciating (in the sense of becoming aware of) the qualities of an educational program and their meaning. This expertise is similar to that of an art critic who has special appreciation of an art work because of intensive study of related art works and of art theory.
• The other aspect of the method is criticism, which is the process of describing and evaluating that which has been appreciated. The validity of educational criticism depends heavily on the expertise of the evaluator.
Differences between Historical and Evaluation Research
• Historical research aims to assess the worth and meaning of historical sources: documents, records, relics, oral history, etc. The search is for facts relating to questions about the past, the interpretation of these facts, and their significance for the present.
• Evaluation research aims to assess the merit, value, or worth of educational programs and materials of any level of schooling. It facilitates decision-making concerning policy, management, or political strategy to improve educational matters.
Conclusion
• Each type of research addresses different types of questions, and each one is necessary for advancing the field of education. The decision to undertake one of these types of research will depend primarily on the questions of interest. However, both historical and evaluation research draw to varying degrees on the qualitative and quantitative traditions of research.
• In quantitative evaluation research, objectives provide the criteria for judging the merits of the product, e.g., publication and cost, physical properties, content, instructional properties, etc. In qualitative research, the worth of an educational program or product depends heavily on the values and perspectives of researchers.
• In historical research the historian discovers objective data but also can interpret and critique, making personal observations on the worth & value of findings.
Thursday, 4 June 2009
Summary #1: Statistical Techniques (for processing and analysis of data) by Nelson Dordelly-Rosales
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Taylor, J. K., and Cihon, C. (2007). Statistical techniques for data analysis. (2nd ed.). New York: Chapman & Hall/CRC.
Key terms and definitions
Research Design: refers to all procedures selected by a researcher for studying a particular set of questions or hypotheses. In this process the researcher creates an empirical test to support or refute a hypothesis. Designing a research study involves several steps: drawing conclusions from previous studies, developing a rationale or theory, formulating questions and hypotheses (a reasoned proposal predicting a cause), choosing a design, gathering the data, summarizing the data and determining the statistical significance of the results, drawing conclusions, and beginning the next study.
Quantitative research using statistical methods typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods. Descriptive statistics (also called summary statistics) are used to describe the data we have collected on a research sample. The main descriptive statistics are the mean, median, and standard deviation; they indicate the average score and the variability of scores for the sample. Inferential statistics are used to make inferences from sample statistics to population parameters. They include sampling distributions and confidence intervals, one- and two-sample topics (comparison of means, ratio of variances), propagation of error in a derived or calculated value, regression analysis, testing hypotheses, and drawing inferences.
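As a minimal illustration of these summary statistics (the scores below are hypothetical, not data from the chapter), Python’s standard library can compute them directly:

```python
import statistics

# Hypothetical achievement scores for a research sample (illustrative only).
scores = [72, 85, 90, 68, 77, 85, 93, 60, 81, 79]

mean = statistics.mean(scores)      # average score: 79.0
median = statistics.median(scores)  # middle score: 80.0
stdev = statistics.stdev(scores)    # sample standard deviation: ~10.2

print(mean, median, round(stdev, 1))
```

The mean and median describe central tendency, while the standard deviation describes the variability of scores around the mean.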
Types of Quantitative Research Designs: descriptive studies and studies aimed at discovering causal relationships (causal-comparative, correlational, or experimental). Causal-comparative research examines causal or functional relationships between variables (the way in which variables influence or affect each other); it aims to discover possible causes for the phenomenon being studied by comparing subjects in whom a characteristic is present with similar subjects in whom it is absent or present to a lesser degree. Experimental research design is ideally suited to establishing causal relationships if proper controls are used; its key feature is that a treatment variable is manipulated. Correlational studies include all research projects in which an attempt is made to discover or clarify relationships through the use of correlation coefficients, which tell the researcher the magnitude of the relationship between two variables.
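To make the idea of a correlation coefficient concrete, here is a small sketch computing the Pearson product-moment correlation; the data (hours of study vs. achievement scores) are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Sum of cross-products of deviations from the means.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: hours of study and achievement scores for six students.
hours = [1, 2, 3, 4, 5, 6]
scores = [55, 60, 62, 70, 71, 80]

r = pearson_r(hours, scores)  # ~0.98: a strong positive relationship
```

A value of r near +1 or -1 indicates a strong relationship; a value near 0 indicates little or no linear relationship.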
Response to questions
The chapter aims to describe and explain the main statistical techniques for processing and analyzing data. It helps the reader become familiar with the language, principles, reasoning, and methodologies not only of quantitative research (rooted in the positivistic approach to scientific inquiry) but also of qualitative research methods (observation, ethnographic interview, survey).
The specific topic of the chapter is the three main types of statistical techniques: descriptive statistics, inferential statistics, and test statistics. The chapter also deals with measurement in educational research, usually expressed in different types of scores.
The overall purpose is to help researchers understand four kinds of information about statistical tools: (1) what they should know about statistics and what statistical tools are available, (2) under what conditions each tool is used, (3) what the statistical results mean, and (4) how the statistical calculations are made.
In general, the authors are saying that we need to analyze research results effectively. They suggest that we make maximum use of the data collected and apply appropriate statistical techniques when analyzing our research data.
This information is interesting because statistical techniques are used to (a) describe educational phenomena, (b) make inferences from samples to populations, (c) identify psychometric properties of tests, and (d) apply the mathematical procedures involved in the use of statistical formulas: measures of central tendency, measures of variability, correlation, tests, etc. A sound research plan specifies the statistical tools to be used in the data analysis. Statistical tools should be decided upon before data have been collected, because different tools may require that the data be collected in different forms.
Closing summary
As researchers we should know that a statistical research project aims to investigate causality, and in particular to draw a conclusion on the effect of independent variables (predictors) on dependent variables (responses); and that there are two types of causal statistical studies: experimental and observational. An experimental study involves taking measurements, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements. In an observational study we simply gather data and investigate correlations between predictors and responses. The basic steps of an experiment are: planning, design, summary (descriptive statistics), reaching consensus (inferential statistics), and documenting and presenting results. We should be careful in choosing the right statistical tools for the data analysis (see the visual graphic), because occasionally a statistical tool is used when the data to be analyzed do not meet the conditions required for the tool in question. After appropriate statistical tools have been selected and applied to the research data, the next step is to interpret the results. Interpretation must be done with care. Fortunately, as researchers today we have access to the techniques and technology we need to analyze statistical data. Computers can help with data analysis techniques that were once beyond the calculation reach of even professional statisticians. All we need is practical guidance on how to use them. For example, measurement analysis can be performed with the MINITAB statistical software, which improves the presentation of results.
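The "reaching consensus" step relies on inferential statistics; one common tool is a confidence interval for a population mean. A minimal sketch (the sample is hypothetical, and the normal critical value 1.96 is a simplification appropriate mainly for larger samples; for small samples a t critical value would be more suitable):

```python
import math
import statistics

def ci95_mean(sample):
    """Approximate 95% confidence interval for a population mean,
    using the normal critical value 1.96 (simplifying assumption)."""
    m = statistics.mean(sample)
    # Standard error of the mean: sample SD divided by sqrt(n).
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical sample of test scores.
low, high = ci95_mean([72, 85, 90, 68, 77, 85, 93, 60, 81, 79])
```

The interval (roughly 72.7 to 85.3 here) estimates the range within which the population mean plausibly lies, which is the kind of generalization from sample to population that inferential statistics makes possible.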
Reflection
The purpose, structure, and general principles of educational research methodology, quantitative and qualitative measurement analysis, and, most importantly, the statistical techniques are valuable to everyone who produces, uses, or evaluates data. Descriptive statistics help us summarize the data we have collected on a research sample, and inferential statistics are important in educational research because they allow us to generalize from a sample or samples to reach conclusions about large populations. We must be aware of misuses and abuses of statistics in research. When the variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision), which propagate to any combination of the variables in a function. Statistical techniques can help us examine this propagation of error. Statistical techniques are useful tools for collecting, classifying, and using systematically gathered numerical facts in research. They are tools for designing research, processing and analyzing data, and drawing inferences or conclusions.
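The propagation of measurement uncertainty mentioned above can be sketched for the simple case of a product of two independent measurements, where the relative errors combine in quadrature (the measurement values below are hypothetical):

```python
import math

def propagate_product(x, dx, y, dy):
    """Uncertainty of z = x * y for independent measurement errors,
    using the quadrature rule: (dz/z)^2 = (dx/x)^2 + (dy/y)^2."""
    z = x * y
    dz = abs(z) * math.sqrt((dx / x) ** 2 + (dy / y) ** 2)
    return z, dz

# Hypothetical measurements: 2.0 +/- 0.1 and 3.0 +/- 0.2.
area, d_area = propagate_product(2.0, 0.1, 3.0, 0.2)
# area = 6.0, d_area = 0.5
```

Even modest instrument uncertainties (5% and about 7% here) combine into a noticeable uncertainty (about 8%) in the derived value, which is why derived quantities in research data deserve explicit error analysis.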
A few years ago, I had an excellent research experience using statistics in a survey. I worked on a survey of the teaching methods of history teachers at the Ministry of Education in Venezuela. The survey used the most common survey descriptive statistics, including percents, medians, means, and standard deviations. The results were presented in tables, and we interpreted them and drew conclusions, establishing significant differences between data points. Statistical software was used to improve the quality of the final presentation of results. We interpreted some similarities and some differences between the teachers of public and private schools. We found that about 60% involved use of the traditional “lecture” method. As a result, the Ministry of Education developed training workshops on a variety of teaching methods.
Summary # 2: Collecting Research Data
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Wallen, E., and Fraenkel, J.C. (2007). Educational Research: A Guide to the Process. (2nd ed.). London: Lawrence Erlbaum Associates.
Key terms and definitions
Data-Collection Tools: questionnaires, interviews, and observations, all aimed at gathering similar kinds of data, are the most common instruments for data collection in survey research. Other techniques for collecting survey information are tests, self-report measures, and examination of records.
Survey research is a distinctive research methodology for systematic data collection. Surveys are often used simply to collect information, such as the percentage of respondents who hold or do not hold a certain opinion. Surveys can also be used to explore relationships between different variables. The cross-sectional survey: standard information is collected at one point in time from a sample drawn from a predetermined population. When information is collected from the entire population, the survey is called a census. The longitudinal survey: data are collected from respondents at different points in time in order to study changes or explore time-ordered associations. Three longitudinal designs are commonly employed in survey research: trend studies, cohort studies, and panel studies. Trend studies: a given general population is sampled at each data-collection point. The same individuals are not surveyed, but each sample represents the same population. For example, we might survey history teachers each year and compare the results from year to year. Cohort studies: a specific population is followed over a period of time. Panel studies: the researcher selects a sample at the outset of the study and then surveys the same individuals at each subsequent data-collection point.
Survey Interview: involves the collection of data through direct verbal interaction between individuals. It permits direct follow-up (in person, by telephone, by computer, or by recording), yields more data (e.g., through self-check tests), and offers greater clarity than questionnaires.
Collecting observational data: three types of observational variables may be distinguished: descriptive, inferential and evaluative.
Content Synthesis
The aim of this chapter is to explain different techniques for collecting research data or information. Some of these methods depend on the methodology and the theoretical assumptions used in the research. There is a tendency for researchers in the functionalist, positivist, or ‘scientific’ paradigm to collect hard, objective numbers through observation, experimentation, extraction from published sources, questionnaires, and structured interviews; they emphasise quantitative techniques over qualitative methods. Law and humanistic researchers in the interpretative and radical humanist paradigms use qualitative methods. However, matching methodologies and methods is the current tendency in educational research. The mixed-methods research paradigm and triangulation studies are ways to make research studies more robust and rigorous by verifying results through different methods, thus ensuring that the results are not a function of the research method.
The specific topic of the chapter is data-collection tools in surveys, used to obtain standardized information from all subjects in the sample. The focus is on survey research, a distinctive research methodology for systematic data collection. The information to be collected is assumed to be quantifiable. The chapter helps graduate students in education learn the steps needed to carry out the data-collection process. The overall purpose is the improvement of educational research through appropriate data collection.
What are the authors saying? The chapter provides an explanation of the techniques for preparing and using the tools of survey research, considering the various types of knowledge that can be generated by analysis of survey data. Collecting research data properly is worth doing: survey research leads to new knowledge, and this knowledge contributes to improving education in different ways.
By selecting and using data-gathering techniques and survey research appropriately, we can avoid mistakes sometimes made by researchers. Cautions include several threats to the validity of the instrumentation process; for example, an extraneous event may cause respondents to answer differently. The chapter also warns that for our theses we need to obtain university IRB approval for the collection of data from human subjects.
Closing summary
In general, research data may be categorised as primary and secondary data. Primary data are data generated by the researcher using data-gathering techniques (questionnaires, interviews, etc.). Secondary data are those that have been generated by others and are included in data-sets, case materials, computer or manual databases, or published by various private organisations (e.g., annual reports of companies), public organisations or government departments (official statistics from the Statistical Office), and international organisations such as the International Monetary Fund, the World Bank, and the United Nations, among others. The chapter mainly focuses on what survey research is, what the data-collection tools are, and what the types of survey research are. It describes the cross-sectional survey and the longitudinal survey. It explains three ways of collecting research data through longitudinal surveys, specifically trend studies, cohort studies, and panel studies. It also provides excellent examples to illustrate the major characteristics of each type of survey research and describes the advantages and disadvantages of each.
Reflection
The major purpose of surveys is to describe certain characteristics or variables of a population. Some characteristics of survey research are: (1) information is collected from a group of people (rather than from every member of the population) in order to describe some aspects (such as abilities, opinions, attitudes, beliefs, and/or knowledge) of the population of which that group is a part; (2) the main way in which the information is collected is by asking questions through questionnaires and/or interviews, and the answers given by respondents constitute the data of the study. Among the major advantages of survey research are reduced cost and the variety of information that can be collected. Among the major disadvantages are biases inherent in the data-collection process and possible security or confidentiality concerns. In 2001 I was able to participate in a longitudinal survey (a cohort study) at the Catholic University in Venezuela. In this design, the college sampled the graduating class over a couple of years using questionnaires. I realized that there are unique problems and pressures that affect longitudinal studies because of the extended period of time over which data are collected, in comparison to cross-sectional studies. One danger is that the issues studied, and the measures and theories used, may become obsolete over the course of the study. Also, the survey was too long, and a number of participants left the last questions without a response. Reading Borg & Gall (1999) made me reflect on the importance of carefully planning research surveys (and short-term uses for the data should be planned ahead). Indeed, success depends on clearly defining long-term goals, specific variables, and limitations and delimitations in the generalizability of findings. In order to guard against obsolescence, longitudinal research should be theoretically broad-minded and mixed. It is important to select respondents carefully and to choose a large enough sample size. In order to draw legitimate conclusions about the specified population, sampling must be representative and valid statistical assumptions must hold.
Summary # 3: Collecting Research Data with Questionnaires
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Fleming, C. M., and Bowden, M. (2009) “Web-based surveys as an alternative to traditional mail methods” Journal of Environmental Management, 90, 1, pp. 284-292
Key terms and definitions
Questionnaire: can be defined as a set of questions to which participants record their answers, usually within closely defined alternatives (Fleming and Bowden, 2009). There are three main types: postal or mail questionnaires, online questionnaires, and personally administered questionnaires.
Mail Questionnaires: the questionnaires are sent by post to the sample participants, usually with a pre-paid self-addressed envelope to encourage response. Some advantages are low cost and anonymity; respondents can give more thought to the questions, and researcher bias is lower than with personally administered questionnaires. Some disadvantages are possible misinterpretation of questions, possible problems with language, and a lower response rate (often requiring a second or even a third mailing).
Hand-delivered or personally administered questionnaires: the researcher personally administers the questionnaire to the participants, usually at the participants’ workplace, residence, or another adequate location. Some advantages are a faster response compared to mail questionnaires; the researcher can clarify questions for the participants, can motivate honest answers by emphasising the participants’ contribution, and can increase the response rate through personal persuasion. A possible disadvantage is that the researcher may introduce personal bias, so responses may vary compared to mail questionnaires.
Online questionnaires: enable the researcher to collect large volumes of data quickly and at low cost, with direct access to research populations; it is possible to make them friendly and attractive, thus encouraging higher response rates, and data-entry errors are often low. Some disadvantages are sample bias, the technological knowledge required of respondents, and concerns about anonymity, privacy, and confidentiality (Fleming and Bowden, 2009).
Content synthesis
The chapter aims to help the reader to understand the main steps in conducting questionnaire surveys, the rules that researchers should follow to guarantee high-quality surveys and the importance of careful planning and sound methodology.
The steps in conducting a questionnaire survey are the following: (1) defining objectives, (2) selecting a sample, (3) writing items, (4) constructing the questionnaire, (5) pretesting, (6) preparing a letter of transmittal, (7) sending out the questionnaire and follow-ups, and (8) analyzing the results and preparing the research report. Surveyors should try to make questionnaires attractive and easy to complete. They should also number the questionnaire items and pages, include the name and address of the person to whom the form should be returned, include brief and clear instructions, and use examples before any items, etc.
The overall purpose of the chapter is to guide readers, especially graduate students of education and teachers, in how to apply the necessary tools, procedures, and techniques for effectively designing and conducting a survey questionnaire in educational research.
The authors are saying that, given the objectives of a survey, we as graduate students should know the rules of questionnaire format and how to write both closed-form and open-ended questionnaire items to measure those objectives.
The information provided is interesting because questionnaires are useful instruments for obtaining access to organisations and, more specifically, for obtaining evidence of consensus among respondents on different issues. With careful planning and sound methodology, the mail questionnaire can be a very valuable research tool in education.
Closing summary
Chapter 8 provides a clear explanation of the steps in conducting a questionnaire survey and the set of rules that researchers should apply when conducting one. Among these rules are: define the problem clearly, list objectives, construct neat items, make the questionnaire attractive, and include the name and address of the person to whom the form should be returned. Regarding form, the questionnaire should include brief, clear instructions, use examples before any items, organize the questions in a logical sequence, and be easy to complete. In relation to the organization of content, when the questionnaire moves to a new topic it should include a transitional sentence to help respondents switch their train of thought; it should begin with a few interesting and nonthreatening items; important items should not be placed at the end, while threatening or difficult questions should be placed near the end; and items should be meaningful to the respondents. Finally, if there are attitude measurements, you should investigate respondents’ familiarity with the topic (pilot the questionnaire with a small sample beforehand), and watch out for anonymity, because anonymous non-respondents cannot be identified for follow-up (whether to use anonymity depends on the specific goals of the study). The authors recommend pre-testing the questionnaire, which requires the following: select a sample of individuals from a population similar to our subjects and ask them to restate their understanding of the meaning of each question in their own words, to make sure the items are clearly stated; apply the questionnaire to a sample to check the percentage of replies; read the subjects’ comments and make the changes necessary to improve the questionnaire; and make a brief analysis of the pre-test results, making any necessary changes (adding questions, correcting wording, etc.). Then prepare a letter of transmittal; the authors say it is important to pre-contact the sample to assure cooperation.
The letter must be brief and precise, explain good reasons for the study, assure privacy and confidentiality if possible, and associate the study with a professional institution or organization (an authority symbol).
Reflection
The questionnaire can be a very valuable research tool in education. It is a data collection tool in which written questions are presented that are to be answered by a selected sample of respondents. Collecting research data with questionnaires requires careful planning and sound methodology. The authors describe in detail all the steps that must be taken to carry out a successful questionnaire survey.
The key to carrying out a satisfactory questionnaire study is to begin by clearly defining the research problem and listing specific objectives or hypotheses. That is, the researcher needs to have a clear understanding of what s/he hopes to obtain from the results. Otherwise, it will be very difficult to make the right decisions regarding the selection of a sample, the construction of the questionnaire, and the methods for analyzing the data. Identifying the target population and selecting a sample is also key to a successful questionnaire survey.
The researcher also must be very careful in designing and constructing items. The qualities of a good questionnaire survey are the following: clarity; short items; avoiding items that combine two separate ideas in one item; avoiding technical terms, jargon, or confusing words; asking general questions first and specific questions afterward; avoiding biased or leading questions (respondents are eager to please); and avoiding questions that may be psychologically threatening. The authors suggest sending questionnaires and follow-ups by special delivery mail. The questionnaires must be neat and carefully planned.
Below are two diagrams that synthesize statistical techniques, and a glossary that can help in understanding some of the terms useful in conducting research.
Reference:
Lindberg, V. (2000). Uncertainties and error propagation. http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart1.html#range
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Taylor, J. K., and Cihon, C. (2007). Statistical techniques for data analysis. (2nd ed.). New York: Chapman & Hall/CRC.
Key terms and definitions
Research Design: refers to all procedures selected by a researcher for studying a particular set of questions or hypotheses. In this process the researcher creates an empirical test to support or refute a hypothesis. The process of designing a research study has several steps: conclusions from previous studies, rationale or theory, questions and hypotheses (or suggested explanation or a reasoned proposal predicting a cause), design, gathering the data, summarizing the data and determining the statistical significance of the results, conclusions and beginning of next study.
Quantitative research: typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods. Descriptive statistics, also called summary statistics, are used to “describe” the data we have collected on a research sample. The main descriptive statistics are the mean, median, and standard deviation; they are used to indicate the average score and the variability of scores for the sample. Inferential statistics are used to make inferences from sample statistics to population parameters. They include sampling distributions and confidence intervals, one- and two-sample topics (comparison of means, ratio of variances), propagation of error in a derived or calculated value, regression analysis, testing hypotheses, and drawing inferences.
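The distinction between descriptive and inferential statistics can be illustrated with a short sketch using Python’s standard library. The scores below are invented for illustration: the mean, median, and standard deviation describe the sample itself, while the confidence interval makes an inference about the wider population (using the normal approximation for simplicity).

```python
import math
import statistics as st

# Hypothetical test scores for a sample of 10 students (illustrative data only).
scores = [72, 85, 78, 90, 66, 81, 75, 88, 79, 84]

# Descriptive statistics: summarize the sample we actually have.
mean = st.mean(scores)
median = st.median(scores)
sd = st.stdev(scores)          # sample standard deviation (n - 1 denominator)

# Inferential statistics: a 95% confidence interval for the population mean,
# using the normal approximation (z = 1.96) for simplicity.
se = sd / math.sqrt(len(scores))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean={mean:.1f} median={median:.1f} sd={sd:.2f}")
print(f"95% CI for the population mean: ({ci[0]:.1f}, {ci[1]:.1f})")
```

With a sample this small a t-based interval would be more accurate; the z-interval is used here only to keep the sketch short.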
Types of Quantitative Research Designs: Descriptive studies and studies aimed at discovering causal relationships (causal-comparison, correlation, or experiment). Causal-Comparison: refers to causal or functional relationships between variables (the way in which variables influence or affect each other). The causal-comparative method is aimed at the discovery of possible causes for the phenomenon being studied by comparing subjects in whom a characteristic is present with similar subjects in whom it is absent or present to a lesser degree. Experimental research design: is ideally suited to establish causal relationships if proper controls are used. The key feature of experimental research is that a treatment variable is manipulated. Correlational studies: include all research projects in which an attempt is made to discover or clarify relationships through the use of correlation coefficients. It tells the researcher the magnitude of the relationship between two variables.
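The correlation coefficient mentioned above can be sketched from its definition; the study-hours and exam-score data below are hypothetical, chosen only to show a strong positive relationship.

```python
import statistics as st

# Hypothetical data: hours of study and exam scores for eight students.
hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 60, 68, 70, 75, 79]

# Pearson correlation coefficient, computed from the definition:
#   r = covariance(x, y) / (sd(x) * sd(y))
n = len(hours)
mx, my = st.mean(hours), st.mean(scores)
cov = sum((x - mx) * (y - my) for x, y in zip(hours, scores)) / (n - 1)
r = cov / (st.stdev(hours) * st.stdev(scores))

print(f"r = {r:.3f}")   # close to +1: a strong positive relationship
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or no linear relationship, which is exactly the “magnitude” the text refers to.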
Response to questions
The chapter aims to describe and explain the main statistical techniques for the processing and analysis of data. It helps the reader become familiar with the language, principles, reasoning, and methodologies of statistical techniques in both quantitative research (rooted in the positivistic approach to scientific inquiry) and qualitative research (observation, ethnographic interview, survey).
The specific topic of the chapter is the description of the three main types of statistical techniques: descriptive statistics, inferential statistics, and test statistics. The chapter also deals with measurement in educational research, usually expressed in different types of scores.
The overall purpose is to help researchers understand four kinds of information about statistical tools: (1) what they should know about statistics and what statistical tools are available, (2) under what conditions each tool is used, (3) what the statistical results mean, and (4) how the statistical calculations are made.
In general, the author is saying that we need to analyze research results effectively. The authors suggest that we make maximum use of data collected and apply appropriate statistical techniques when analyzing our research data.
This information is interesting because statistical techniques are used to a) describe educational phenomena; b) make inferences from samples to populations; c) identify psychometric properties of tests; and d) apply the mathematical procedures involved in statistical formulas: measures of central tendency and variability, correlation, tests, etc. A sound research plan is one that specifies the statistical tools to be used in the data analysis. Statistical tools should be decided upon before data have been collected, because different tools may require that the data be collected in different forms.
Closing summary
As researchers we should know that a statistical research project aims to investigate causality, and in particular to draw conclusions about the effect of independent variables (predictors) on dependent variables (responses), and that there are two types of causal statistical studies: experimental and observational. An experimental study involves taking measurements, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements. In an observational study we simply gather data and investigate correlations between predictors and responses. The basic steps of an experiment are: planning, design, summary (descriptive statistics), reaching conclusions (inferential statistics), and documenting and presenting results. We should be careful in choosing the right statistical tools for the data analysis (see visual graphic), because occasionally a statistical tool is used when the data to be analyzed do not meet the conditions required for that tool. After appropriate statistical tools have been selected and applied to the research data, the next step is to interpret the results, and interpretation must be done with care. Fortunately, as researchers today we have access to the techniques and technology we need to analyze statistical data. Computers can handle data analysis techniques that were once beyond the calculating reach of even professional statisticians; all we need is practical guidance on how to use them. For example, measurement analysis can be performed with the MINITAB statistical software, which also improves the presentation of results.
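One of the inferential techniques named above, the comparison of means between a treatment and a control group, can be sketched in a few lines. The scores are invented, and Welch’s t statistic is used here as one common way to compare two sample means (the chapter itself does not prescribe this particular formula).

```python
import math
import statistics as st

# Hypothetical post-test scores for a treatment group and a control group.
treatment = [82, 88, 75, 91, 85, 79, 86, 90]
control   = [74, 80, 70, 78, 72, 76, 81, 69]

def welch_t(a, b):
    """Welch's t statistic for the difference between two sample means."""
    va, vb = st.variance(a), st.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (st.mean(a) - st.mean(b)) / se

t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # |t| well above ~2 suggests a real difference in means
```

In practice one would compare t against the appropriate t distribution (or let software such as MINITAB report the p-value) before drawing a conclusion.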
Reflection
The purpose, structure, and general principles of educational research methodology, quantitative and qualitative measurement analysis, and, most importantly, the statistical techniques are valuable to everyone who produces, uses, or evaluates data. Descriptive statistics help us summarize the data we have collected on a research sample, and inferential statistical techniques are important in educational research because they allow us to generalize from a sample or samples, to reach conclusions about large populations. We must be aware of misuses and abuses of statistics in research. When the variables are the values of experimental measurements, they carry uncertainties due to measurement limitations (e.g., instrument precision) that propagate into any quantity calculated from them; statistical techniques can help us track this propagation of error. Statistical techniques are useful tools for collecting, classifying, and using statistics in research, that is, methods of using systematically collected numerical facts. They are tools for designing research, processing and analyzing data, and drawing inferences or conclusions.
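The propagation-of-error rules described by Lindberg (2000) can be sketched for a simple product of two measured quantities; the measurements and uncertainties below are hypothetical.

```python
import math

# Hypothetical measurements: the sides of a rectangle, each with an
# instrument uncertainty (e.g., a ruler readable to +/- 0.1 cm).
length, d_length = 12.0, 0.1   # cm
width,  d_width  =  5.0, 0.1   # cm

# For a product q = x * y with independent errors, relative uncertainties
# add in quadrature: dq/q = sqrt((dx/x)**2 + (dy/y)**2)
area = length * width
d_area = area * math.sqrt((d_length / length) ** 2 + (d_width / width) ** 2)

print(f"area = {area:.1f} +/- {d_area:.1f} cm^2")
```

The same quadrature rule extends to quotients, and analogous rules exist for sums and differences (where absolute, not relative, uncertainties are combined).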
A few years ago, I had an excellent research experience using statistics in a survey. I worked in a survey of teaching methods of History teachers at the Ministry of Education in Venezuela. The survey system included the most commonly used survey descriptive statistics, including: percents, medians, means, and standard deviations. The results were presented in tables and we interpreted and drew some conclusions. We established significant differences between data points. Statistical Software was used to improve quality in presenting the final results. We interpreted some similarities and some differences between the teachers of public and private schools. We found that about 60% involved used of the traditional “lecture” method. As a result, the Ministry of Education developed training workshops on a variety of teaching methods.
Summary # 2: Collecting Research Data
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Wallen, E., and Fraenkel, J.C. (2007). Educational Research: A Guide to the Process. (2nd ed.). London: Lawrence Erlbaum Associates.
Key terms and definitions
Data-Collection Tools: questionnaires, interviews, and observations aimed at gathering similar kinds of data, are the most common instruments for data collection in survey research. Other techniques for collecting survey information are tests, self report measures, and examination of records.
Survey research is a distinctive research methodology for systematic data collection. Surveys are often used simply to collect information, such as the percentage of respondents who hold or do not hold a certain opinion. Surveys can also be used to explore relationships between different variables. The cross-sectional survey: standard information is collected at one point in time from a sample drawn from a predetermined population. When information is collected from the entire population, the survey is called a census. The longitudinal survey: data are collected from respondents at different points in time in order to study changes or explore time-ordered associations. Three longitudinal designs are commonly employed in survey research: trend studies, cohort studies, and panel studies. Trend studies: in this design a given general population is sampled at each data-collection point. The same individuals are not surveyed, but each sample represents the same population. For example, we might survey History teachers each year and compare the results from year to year. Cohort studies: in this design a specific population is followed over a period of time. Panel studies: in this design the researcher selects a sample at the outset of the study and then, at each subsequent data-collection point, surveys the same individuals.
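The difference between a trend study and a panel study can be sketched in a few lines; the teacher IDs, years, and sample sizes below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical population: 1000 teacher IDs surveyed annually.
population = list(range(1000))

# Trend study: draw a FRESH sample from the same population each year.
trend_samples = {year: random.sample(population, 50) for year in (2008, 2009)}

# Panel study: select ONE sample at the outset and survey the same
# individuals at every data-collection point.
panel = random.sample(population, 50)
panel_samples = {year: panel for year in (2008, 2009)}

# The panel is identical across years; the trend samples almost surely differ.
print(panel_samples[2008] == panel_samples[2009])
print(trend_samples[2008] == trend_samples[2009])
```

A cohort study would sit between the two: each year’s sample is freshly drawn, but only from the specific cohort defined at the outset (e.g., the 2008 graduating class).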
Survey Interview: involves the collection of data through direct verbal interaction between individuals. It permits direct follow-up (in person, by telephone, by computer, or by recording), can yield more data (e.g., through self-check tests), and offers greater clarity than questionnaires.
Collecting observational data: three types of observational variables may be distinguished: descriptive, inferential and evaluative.
Content Synthesis
The aim of this chapter is to explain different techniques for collecting research data or information. Some of these methods depend on the methodology and the theoretical assumptions used in the research. There is a tendency for researchers in the functionalist, positivist or ‘scientific’ paradigm to collect hard objective numbers by observation, experimentation, and extraction from published sources, questionnaires and structured interviews. They emphasise quantitative techniques over qualitative methods. Law and humanistic researchers in the interpretative and radical humanist paradigms use qualitative methods. However, matching methodologies and methods is the current tendency in educational research. Mixed method research paradigm and triangulation studies are ways to make research studies more robust and rigorous by verifying results through different methods, thus ensuring that the results are not a function of the research method.
The specific topic of the chapter is data-collection tools in surveys to obtain standardized information from all subjects in the sample. The focus is on survey research, a distinctive research methodology for systematic data collection. Information to be collected is assumed to be quantifiable. The chapter helps graduate students in education learn steps needed to carry out the collection of data process. The overall purpose is the improvement of educational research through an appropriate collection of data.
What are the authors saying? The chapter provides an explanation of the techniques for preparing and using tools of survey research, considering the various types of knowledge that can be generated by analysis of survey data. Collecting research data properly is worth doing. Survey research leads to new knowledge and this knowledge contributes to improve education in different ways.
By selecting and using data-gathering techniques and survey research appropriately, we can avoid mistakes sometimes made by researchers. Cautions include several threats to the validity of the instrumentation process; for example, an extraneous event may cause the respondents to answer differently. The chapter also warns that for our theses we need to obtain university IRB approval for the collection of data from human subjects.
Closing summary
In general, research data may be categorised as primary and secondary data. Primary data are data generated by the researcher using data-gathering techniques (questionnaires, interviews, etc.). Secondary data are those that have been generated by others and are included in data-sets, case materials, or computer or manual databases, or published by private bodies (e.g., companies’ annual reports), public organisations or government departments (official statistics from the Statistical Office), and international organisations such as the International Monetary Fund, the World Bank, and the United Nations, among others. The chapter mainly focuses on what survey research is, what the data-collection tools are, and what the types of survey are. It describes the cross-sectional survey and the longitudinal survey. It explains three ways of collecting research data through a longitudinal survey, specifically trend studies, cohort studies, and panel studies. It also provides excellent examples to illustrate the major characteristics of each type of survey and describes the advantages and disadvantages of each one.
Reflection
The major purpose of surveys is to describe certain characteristics or variables of a population. Some characteristics are: 1) information is collected from a group of people (rather than from every member of the population) in order to describe some aspects (such as abilities, opinions, attitudes, beliefs, and/or knowledge) of the population of which that group is a part.
2) The main way in which the information is collected is through asking questions, via questionnaires and/or interviews. The answers given by respondents constitute the data of the study. Among the major advantages of survey research are reduced cost and the variety of information that can be collected. Among the major disadvantages are biases inherent in the data-collection process and possible security or confidentiality concerns. In 2001 I was able to participate in a longitudinal survey (a cohort study) at the Catholic University in Venezuela. In this design, the College sampled the graduating class over a couple of years using questionnaires. I realized that there are unique problems and pressures that affect longitudinal studies, because of the extended period of time over which data are collected in comparison to cross-sectional studies. One danger is that the issues studied, and the measures and theories used, may become obsolete over the course of the study. Also, the survey was too long, and a number of participants left the last questions unanswered. Reading Borg & Gall (1999) made me reflect on the importance of carefully planning research surveys (including planning short-term uses for the data ahead of time). Indeed, success depends on clearly defining long-term goals, specific variables, and the limitations and delimitations on the generalizability of findings. To guard against obsolescence, longitudinal research should be theoretically broad-minded and mixed in method. It is also important to select respondents carefully and to choose a large enough sample size: to draw legitimate conclusions about the specified population, the sample must be representative and valid statistical assumptions must hold.
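The point about choosing a large enough sample can be made concrete with the standard formula for estimating a population proportion, n = z² p(1 - p) / e². The margin of error and confidence level below are illustrative choices, not values from the text.

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size to estimate a population proportion.

    Uses the standard formula n = z**2 * p * (1 - p) / e**2, with p = 0.5
    as the most conservative (largest-n) assumption.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# To estimate, say, the share of graduates employed within a year to
# within +/- 5 percentage points at 95% confidence:
print(sample_size_for_proportion(0.05))   # 385
```

Halving the margin of error roughly quadruples the required sample, which is one reason long questionnaires with high drop-off (as in the cohort study described above) are so costly to a survey’s validity.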
Summary # 3: Collecting Research Data with Questionnaires
References:
Gall, M. D., Gall, J. P. & Borg, W. R. (1999). Educational research: An introduction (6th ed.). Toronto, ON: Allyn & Bacon.
Fleming, C. M., & Bowden, M. (2009). Web-based surveys as an alternative to traditional mail methods. Journal of Environmental Management, 90(1), 284-292.
Key terms and definitions
Questionnaire: can be defined as a set of questions to which participants record their answers, usually by choosing among closely defined alternatives (Fleming & Bowden, 2009). There are mainly three types: postal or mail questionnaires, online questionnaires, and personally administered questionnaires.
Mail questionnaires: the questionnaires are sent by post to the sample participants, usually with a pre-paid self-addressed envelope to encourage response. Some advantages are low cost and anonymity; respondents can give more thought to the questions, and researcher bias is lower than with personally administered questionnaires. Some disadvantages are possible misinterpretation of the questions, possible problems with language, and a lower response rate (which is usually small enough to require a second or even a third mailing).
Hand-delivered or personally administered questionnaires: the researchers personally administer the questionnaire to the participants, usually at the participants’ workplace, residence, or another suitable location. Some advantages are a faster response than with mail questionnaires; the researcher can clarify questions for the participants and can motivate honest answers by emphasising the participants’ contribution, and personal persuasion increases the response rate. A possible disadvantage is that the researcher may introduce personal bias, so responses may vary compared with mail questionnaires.
Online questionnaires: online questionnaires enable the researcher to collect large volumes of data quickly and at low cost, with direct access to research populations; it is possible to make them friendly and attractive, thus encouraging higher response rates, and data-entry errors are often low. Some disadvantages are sample bias, the technological knowledge required of respondents, and concerns about anonymity, privacy, and confidentiality (Fleming & Bowden, 2009).
Content synthesis
The chapter aims to help the reader to understand the main steps in conducting questionnaire surveys, the rules that researchers should follow to guarantee high-quality surveys and the importance of careful planning and sound methodology.
The steps in conducting a questionnaire survey are the following: (1) defining objectives, (2) selecting a sample, (3) writing items, (4) constructing the questionnaire, (5) pretesting, (6) preparing a letter of transmittal, (7) sending out questionnaire and follow-ups, and (8) analysis of the results and preparing the research report. Surveyors should try to make questionnaires attractive, and easy to complete. Also, they should number the questionnaire items and pages; put name and address of person to whom form should be returned; include brief, clear instructions; use examples before any items, etc.
The overall purpose of the chapter is to guide, especially to graduate students of education, and teachers, in how to apply the necessary tools, procedures and techniques for effectively designing and conducting a survey questionnaire in educational research.
The authors are saying that given the objectives of a survey, we as graduate students should know the rules related to questionnaire format and how to write both closed-form and open-ended questionnaire items to measure them.
Information provided is interesting because questionnaires are useful instruments to obtaining access to organisations and, more specifically, to obtain evidence of consensus among the respondents on different issues. With careful planning and sound methodology, the mail questionnaire can be a very valuable research tool in education.
Closing summary
Chapter 8 provides a clear explanation of the steps in conducting a questionnaire survey and of the rules that researchers should apply along the way. Among those rules: define the problem clearly, list objectives, construct neat items, make the questionnaire attractive, and include the name and address of the person to whom the form should be returned. Regarding form, the questionnaire should include brief, clear instructions, use examples before any items, organize the questions in a logical sequence, and be easy to complete. In relation to the organization of content, the questionnaire should include a transitional sentence when moving to a new topic, to help respondents switch their trains of thought; it should begin with a few interesting and nonthreatening items, avoid placing important items at the end, place threatening or difficult questions near the end, and keep items meaningful to the respondents. Finally, if attitudes are being measured, the researcher should investigate respondents’ familiarity with the topic (by trying the questionnaire out on a small sample beforehand) and weigh anonymity carefully: with anonymous forms, non-respondents cannot be identified for follow-up, so the decision depends on whether anonymity is necessary to achieve the specific goals of the study. The authors recommend pretesting the questionnaire, which requires the following: select a sample of individuals from a population similar to the study’s subjects; ask them to restate their understanding of each question in their own words, to make sure the items are clearly stated; administer the questionnaire to the sample to check the percentage of replies; read the subjects’ comments, make a brief analysis of the pretest results, and make the necessary changes to improve the instrument (adding questions, correcting wording, etc.). Then prepare a letter of transmittal; the authors also say it is important to pre-contact the sample to secure cooperation.
Letter must be brief, precise, explain good reasons, assure privacy and confidentiality, if possible, and associate it with some professional institution or organization (authority symbol).
Reflection
The questionnaire can be a very valuable research tool in education. It is a data collection tool in which written questions are presented that are to be answered by a selected sample of respondents. Collecting research data with questionnaires requires careful planning and sound methodology. The authors describe in detail all the steps that must be taken to carry out a successful questionnaire survey.
The key to carrying out a satisfactory questionnaire study is to begin by clearly defining the research problem and listing specific objectives or hypotheses. That is, the researcher needs to have a clear understanding of what s/he hopes to obtain from the results. Otherwise, it will be very difficult to make the right decisions regarding selection of a sample, construction of the questionnaire, and methods for analyzing the data. Identifying the target population and selecting a sample are also key to guaranteeing success in conducting a questionnaire survey.
The researcher also must be very careful in designing and constructing items. The qualities of a good questionnaire survey are the following: clarity; short items; no items that combine two separate ideas; no technical terms, jargon, or confusing words; general questions asked before specific ones; no biased or leading questions (since subjects are eager to please); and no questions that may be psychologically threatening (e.g., those touching on low morale). The authors suggest sending questionnaires and follow-ups by special delivery mail. The questionnaires must be neat and carefully planned.
Below, there are two diagrams that synthesize statistical techniques and a glossary that can help in understanding some of the terms that are useful in conducting research.
Reference:
Lindberg, V. (2000). Uncertainties and error propagation. http://www.rit.edu/cos/uphysics/uncertainties/Uncertaintiespart1.html#range
Saturday, 23 May 2009
About Surveys
Surveys give participants the chance to express their ideas, and we can get precise answers to our questions. I think the secret of an excellent survey is the structure of the questions and being very clear about the goals of the study. One of the best surveys I have reviewed is the following:
Rosales-Dordelly, C. L., & Short, E. C. (1985). Curriculum Professors’ Specialized Knowledge. Lanham, MD: University Press of America. This book reports a survey of curriculum professors in Canada and the US.
# 1 Validity, Reliability, Trustworthiness
A. Summary by Nelson Dordelly-Rosales: McMillan (2007) summarizes and provides suggestions for monitoring the threats that put internal validity at risk in randomized field trials, control trials, and quasi-experiments in which pre-test differences are “equated”: unit of randomization and local history (equivalence of the groups), intervention-treatment (fidelity, consistency with theory), differential attrition (mortality; tracking participants), testing (instrumentation variations and procedures), subject effects (selection-maturation interaction), diffusion of intervention (treatment; asking appropriate questions), experimenter effects (checking values, biases, needs), and novelty effects (changes to normal routines). It is the responsibility of researchers “to include design features that will lessen the probability that the threat is plausible” (p. 5).
B. Experience: As a graduate student, my recent research activity is in the area of interpretation/construction (my thesis). But I recently reviewed an excellent study by Chauncey Monte-Sano (2005), which resembles another comparative study done at the Ministry of Education in Venezuela. The researchers monitored the threats to validity and trustworthiness through intervention fidelity (pre- and post-test essays, interviews, observations, teacher feedback, assignments, and readings; analysis of students’ progress within each classroom and between both classrooms; and assessment of any changes observed in the students’ work).
C. Suggestions: I think that in studies in which there are pre-test differences, it is necessary to include design features to monitor all plausible threats. Particularly, I would suggest increasing the number of homogeneous comparative groups.
# 2 Different types of sampling methods
Summary: Cui (2003) aims to help readers understand what sampling is (a technique of selecting a representative part of a population for the purpose of drawing conclusions about the whole population), the different types of sampling (probabilistic, non-probabilistic, simple, systematic, stratified, cluster, purposeful), potential sources of error (sampling error, non-coverage error, non-response error, and measurement error), and how to reduce error in mail surveys and interviews (avoiding unrepresentative samples by enlarging the sample size; avoiding interviewer or researcher bias in favouring the selection of units that have specific characteristics; improving survey return rates, etc.).
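Three of the sampling types Cui describes (simple, systematic, and stratified) can be sketched as follows; the sampling frame, strata, and sample sizes are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical sampling frame: 100 student IDs, 0-99, in four program strata.
frame = list(range(100))
stratum = {i: i % 4 for i in frame}   # pretend each ID belongs to one of 4 programs

# Simple random sampling: every unit has an equal chance of selection.
simple = random.sample(frame, 12)

# Systematic sampling: a random start, then every k-th unit in the frame.
k = len(frame) // 12
start = random.randrange(k)
systematic = frame[start::k][:12]

# Stratified sampling: a separate random sample from each stratum,
# guaranteeing that all four programs are represented.
stratified = []
for s in range(4):
    members = [i for i in frame if stratum[i] == s]
    stratified += random.sample(members, 3)

print(len(simple), len(systematic), len(stratified))
```

Cluster sampling would instead select whole groups (e.g., entire classrooms) at random, and purposeful sampling would pick units deliberately rather than by chance.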
The value for me as an educator and as a consumer of educational research: I should use appropriate sampling methods and an adequate response rate for a representative sample. However, I should also evaluate different factors that may affect the quality of data from a research study, for example, procedures, questions asked, validity of questionnaire, among others.
# 3 Two concepts regarding data analysis
a. Helberg (1996) warns researchers about the paradox that statistics can produce dissimilar or contradictory results. The author provides suggestions on how to cope with sources of bias, errors in methodology, and misinterpretation of results. To that end, he explains how to assure representative sampling and valid statistical assumptions, recommends using the methods available for taking measurement error into account in some statistical models, and calls for more precision and accuracy in the interpretation of results.
b. Oliver-Hoyo and Dee Dee (2006) reviewed data collection through three qualitative methods for studying qualitative variables (meaning constructed by individuals): surveys, journal responses, and field notes. The authors argue persuasively that relying on more than two methods is invaluable for avoiding gross errors when drawing conclusions from surveys.
c. How the articles’ information could be of value to you as an educator and consumer: From these readings, I learned that multiple methods of data collection and analysis (quantitative and qualitative) help us to develop a more complete view of the problem and the solution. So, in the example provided regarding accountability, I think that the right approach is integrating different “assessment strategies” so that educators can take advantage of all the information.
Surveys provide the chance to express participants’ ideas and we can get precise answers to our questions. I think the secret of an excellent survey is the structure of the questions and having very clear the goals of the study. One of the best surveys I have reviewed is the following:
Rosales-Dordelly, C.L. and Short, Edmund C. (1985) Curriculum Professors’ Specialized Knowledge. Lanham, MD: University Press of America. This book reports a survey among professors between Canada and the US.
# 1 Validity, Reliability, Trustworthiness
A. Summary by Nelson Dordelly-Rosales: McMillan (2007) summarizes and provides suggestions to monitor the threats that put at risk internal validity in randomized field trial studies or control trials or, on quasi-experiments in which there is “equating” of pre-test differences: unit of randomization and local history (equivalence of the groups), intervention-treatment (fidelity, consistency with theory), differential attrition (mortality-tracking participants), testing (instrumentation variations and procedures), subject effects (selection-maturation interaction), diffusion of intervention (treatment, asking appropriate questions), experimenter effects (checking values, biases, needs) and novelty effects (changes to normal routines). It is the responsibility of researchers “to include design features that will lessen the probability that the threat is plausible” (p.5)
B. Experience: As graduate student, my recent research activity is in the area of interpretation/construction (my theses). But, I recently reviewed an excellent research by Chauncey Monte-Sano (2005) which resembles another study done at the Ministry of Education in Venezuela on comparative studies. The researchers monitored and supervised the threats to validity and trustworthiness through intervention fidelity (pre- and post-test essays, interviews, observations, teacher feedback, assignments, and readings; analysis of students’ progress within each classroom and between both classrooms, assessing any changed observed in the students’ work).
C. Suggestions: I think that in studies in which there are pre-test differences, it is necessary to include design features to monitor all plausible threats. Particularly, I would suggest increasing the number of homogeneous comparative groups.
# 2 Different types of sampling methods
Summary: Cui, Wei Wei (2003)aims to help the readers to understand what is sampling (a technique of selecting a representative part of a population for the purpose of drawing conclusions of the whole population), the different types of sampling (probabilistic, non-probabilistic, simple, systematic, stratified, cluster, purposeful), potential sources of error (sampling error, non-coverage error, non-response error, and measurement error) and how to reduce error in mail surveys and interviews (avoid unrepresentative number, enlarging the sample size; avoid bias of interviewer or survey researcher in favouring the selection of units that have specific characteristics; improving survey return rates, etc).
The value for me as an educator and as a consumer of educational research: I should use appropriate sampling methods and an adequate response rate for a representative sample. However, I should also evaluate different factors that may affect the quality of data from a research study, for example, procedures, questions asked, validity of questionnaire, among others.
# 3 Two concepts regarding data analysis
a. Helberg (1996) warns the researchers on the paradox that statistics can produce dissimilar or contradictory results. The author provides suggestions about how to cope with sources of bias, errors in methodology, and misinterpretation of results. To that end, he explains how to assuring representative sampling and valid statistical assumptions; recommends using methods available for taking measurement error into account in some statistical models and applying more precision and accuracy in interpretation of results.
b. Oliver-Hoyo and Allen (2006) reviewed data collection through three qualitative methods for studying qualitative variables (meaning constructed by individuals): surveys, journal responses, and field notes. The authors argue persuasively that relying on more than two methods is invaluable for avoiding gross errors when drawing conclusions from surveys.
c. How the articles’ information could be of value to me as an educator and consumer: From these readings, I learned that multiple methods of data collection and analysis (quantitative and qualitative) help us develop a more complete view of both the problem and the solution. So, in the example provided regarding accountability, I think the right approach is to integrate different “assessment strategies” so that educators can take advantage of all the information available.
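Helberg’s point about measurement error can be illustrated with a small simulation of my own (not from his article): when a variable is measured with noise, its observed correlation with an outcome is attenuated, which is one way statistics on the same underlying data can yield dissimilar results. The data here are simulated and the noise levels are arbitrary assumptions:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

def corr(xs, ys):
    """Pearson correlation coefficient, computed from population moments."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# True scores and an outcome that depends on them.
true_scores = [random.gauss(0, 1) for _ in range(5000)]
outcome = [t + random.gauss(0, 0.5) for t in true_scores]

# The same scores observed with measurement error added.
noisy_scores = [t + random.gauss(0, 1) for t in true_scores]

r_true = corr(true_scores, outcome)
r_noisy = corr(noisy_scores, outcome)
print(round(r_true, 2), round(r_noisy, 2))  # the noisy correlation is noticeably smaller
```

The underlying relationship is identical in both cases; only the quality of measurement differs, which is why Helberg recommends models that explicitly account for measurement error before interpreting effect sizes.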
Article Review by Nelson Dordelly-Rosales:
1. Krauss, S., (2005) Research Paradigms and Meaning Making: A Primer.
The Qualitative Report, 10 (4), 758-770.
The paper Research Paradigms and Meaning Making: A Primer provides an introduction to some of the basic issues in attempting to work with both quantitative and qualitative research methods. It explains how qualitative data analysis can be used to organize and categorize different levels and forms of meaning. It argues that the heart of the quantitative vs. qualitative “debate” is philosophical, not methodological, and it offers an overview of the epistemological differences of quantitative and qualitative research methodologies. The article introduces the notion of meaning making in social sciences research and how it actually occurs through qualitative data analysis. It defines meaning as “the underlying motivation behind thoughts, actions and even the interpretation and application of knowledge” (Krauss, 2005, p. 763). The task of constructing meaning through qualitative data analysis is explained through a variety of perspectives and approaches.
Problem/Issue and the Importance/Significance
The focus is on the task of constructing meaning through qualitative data analysis. This paper is significant because it examines the concept of the philosophical realist paradigm and introduces the notion of “meaning making” in research methods and how meaning is generated from qualitative data analysis specifically. To that end, it explains epistemological differences between quantitative and qualitative research that allows us to understand phenomena and to get more realistic results. Some examples are also provided.
Research Question(s)
What are the epistemological similarities and differences between quantitative and qualitative research paradigms? How can the realist philosophical paradigm accommodate both quantitative and qualitative research paradigms? How can meaning be constructed and organized using a qualitative data analysis approach?
Sample and sample selection process
Data selection in qualitative research is intuitive, aiming to discover (not measure) potentially important insights. The author explains the need to use multiple research methods to optimize the data selection process and to increase both the breadth and depth of data selection.
Data Collection Methods
Krauss (2005) found that in qualitative data collection, meaning is constructed on “a variety of levels of daily life through the exchange of ideas, interaction, and agreement between the researcher and the participants” (p. 764). The author supports his point of view with examples from the social sciences literature about how meaning can be constructed and organized using a qualitative data analysis approach (interpretivism). The author also cites a multi-year religiosity initiative, a case in which he was involved in conducting both qualitative and quantitative research to assess religiosity in the lives of young people.
Data Analysis Method
Krauss (2005) argues that qualitative data analyses in qualitative research are guided by a reflective paradigm in an attempt to acquire social knowledge. In this sense, according to the author, meaning is constructed in a variety of ways, that is, “through construction, the researcher is not a blank slate; rather s/he is an active participant in the process” (Krauss, 2005, p. 767). This means that, epistemologically, the researcher is engaged in the setting, “participating in the act of ‘being with’ the respondents in their lives to generate meaning of them” (p. 769). In addition, developing themes and storylines featuring the words and experiences of the participants themselves adds richness to the findings.
Limitations/Delimitations/Assumptions
Krauss (2005) explains that the realist paradigm has the unique goal of facilitating the meaning-making process, which is an important learning facilitator that has the power to encourage transformative learning. The realist philosophical paradigm attempts to accommodate quantitative and qualitative research methods. In the area of religion, for example: “the result of the process was a major study that tapped into the richness of individual religious experience, along with a broader understanding of religious behaviors and knowledge levels across large groups of young people” (p. 758). As a whole, the realist paradigm has fewer limitations than either approach on its own.
Trustworthiness/Validity Considerations
According to the author, realist researchers reject the framework of validity that is commonly accepted in more quantitative research in the social sciences. Nevertheless, realist research inherently assumes that there is some reality that can be observed with greater or lesser accuracy or validity. In this sense, “rigor in qualitative data analysis is a necessary element for maximizing the potential for generating meaning” (Krauss, 2005, p. 765). This rigor lends trustworthiness to the results.
Ethical Issues
Qualitative researchers can operate under epistemological assumptions different from those of quantitative researchers. Ethical issues can sometimes result in confusion and uncertainty among researchers. In qualitative research, as in quantitative research, researchers are expected to uphold high standards of academic rigor and to behave with honesty and integrity. Ethical problems can emerge from value conflicts. I think that being a ‘purist’ researcher, looking only at one small portion of a reality that cannot be split or unitized “without losing the importance of the whole phenomenon brings an ethical issue to the research process” (Krauss, 2005, p. 767).
Reflective Assessment
The concept of meaning making in research methods, and how meaning is generated from qualitative data analysis, are the most important contributions of this paper. The article discusses the philosophical differences between quantitative and qualitative research. Quantitative research is positivist, objective, and scientific, and its analysis can be carried out with the statistical software packages commonly used for quantitative (descriptive) data. Qualitative researchers operate under naturalist, constructivist, eclectic, and subjective assumptions (researcher interpretation). Qualitative research is a highly intuitive activity that contributes greatly to the construction of meaning. As researchers, we should focus on “the significance of different levels of meaning such as worldviews or philosophies of life, and the importance of meaning as a critical element to human existence and learning” (Krauss, 2005, p. 767). I think the author makes a good point about the need to use multiple research methods to optimize the data collection and analysis process; a mixed approach increases both the breadth and depth of data collection and data analysis. The author provides an excellent overview of the basic issues in working with both quantitative and qualitative research methods toward the goal of generating meaning. Using both methods together contributes to a better understanding of phenomena, because multiple methods yield a broader understanding of behaviors and knowledge levels across large groups of people. Different philosophical assumptions, or theoretical paradigms about the nature of reality, are essential to understanding the overall perspective from which a study is designed and carried out.
Within this holistic approach, a critical realism framework, qualitative and quantitative methodologies together are appropriate for the goal of generating meaning and understanding thinking, behavior, and worldview formation. Indeed, the heart of the quantitative-qualitative “debate” is philosophical, not methodological.
Article Review
2. Gephart, R. (1999, 11 14). Paradigms and Research Methods. Retrieved 05 21, 2009, from Academy of Management, Research Methods Division: http://division.aomonline.org/rm/1999_RMD_Forum_Paradigms_and_Research_Methods.htm
Gephart (1999) explains three prevailing paradigms or views of the world which are currently shaping research: Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. The author introduces the concept of each paradigm, describes the key features of each worldview or “form of scholarship,” the nature of knowledge pursued, and the different means by which knowledge is produced and assessed within each paradigm or worldview. In brief, the three strands of thinking and researching differ in the following way: Positivism assumes an objective world which scientific methods (for example, experiments and surveys) can more or less readily represent and measure statistically, seeking to predict and explain causal relations among key variables. Post-positivists or interpretivists assert that these methods “impose a view of the world on subjects;” their concern is the interplay between objective and subjective meanings. Critical postmodernists, on the other hand, argue that these imposed views or measures implicitly “support forms of scientific knowledge that explicitly reproduce capitalist structures and associated hierarchies of inequality” (p. 5). In this sense, the goal is “social transformation involving the displacement of existing structures of domination” (p. 6). The author concludes by saying that these paradigms or theories are somewhat “separate but not greatly distant from one another” (p. 7). In general, the goal of research is an adequate reflection of people’s experience when making inquiries from one or more theoretical frameworks.
Problem/Issue and the Importance/Significance
The need to help readers understand some of the basic assumptions underlying forms of research present in the field: Positivism, Post-positivism or Interpretivism and Critical Postmodernism. The author attempts to make us understand the key features, the usefulness of each paradigm, and how the three paradigms can be interwoven into research.
Research Question(s)
What are the epistemological similarities and differences between positivism, interpretivism, and critical postmodernism? What are the main assumptions, key ideas, theories, figures, goals, criteria, units of analysis, and research methods of each paradigm?
Sample and Sample Selection Process
Sample and sample selection vary according to the specific paradigm. Positivism uses quantitative criteria; interpretivism and critical postmodernism apply qualitative criteria and are therefore more intuitive and flexible; however, rigor and specific principles remain essential for good research.
Data Collection Methods
In this article, the author uses grounded theory development and suggests mixing the quantitative and qualitative paradigms. He explains, however, that the data collection and research methods, goals, criteria, and units of analysis differ by paradigm: (a) positivist research uses experiments, questionnaires, secondary data analysis, quantitatively coded documents, Likert scaling, and structural equation modeling, as well as grounded theory testing, among others; (b) interpretivism uses ethnography, participant observation, interviews, conversational analysis, and grounded theory development, along with case studies, conversational and textual analysis, and expansion analysis; (c) critical postmodernism uses field research, historical analysis, and dialectical analysis.
Data Analysis Method
The author explains that the unit of analysis of positivist research is the variable, while the unit of analysis of interpretivism is meaning. Critical theory-Postmodernism (PM) uses deconstruction and textual analysis as its units of analysis.
Limitations/Delimitations/Assumptions
Among the limitations, delimitations, and assumptions of each paradigm are the following: positivist research takes as its goal uncovering truth and facts as quantitatively specified relations among variables. Interpretivism/constructivism is a related approach that is based on analysis and looks for persuasion using ‘sensitizing’ concepts; its goals are to describe meanings, understand members’ definitions of the situation, and examine how objective realities are produced. Critical theory-Postmodernism (PM) aims to investigate and uncover hidden interests, enable a more informed consciousness, displace ideology with scientific insights, and effect change.
Trustworthiness/Validity Considerations
Criteria of validity vary according to each paradigm. The author explains that positivist research uses prediction and explanation, rigor, internal and external validity, and reliability. Interpretivism uses trustworthiness and authenticity. Critical theory-Postmodernism (PM) uses theoretical consistency, historical insights, transcendent interpretations, basis for action, change potential, and mobilization; its units of analysis are contradictions, ‘incidents of exploitation,’ and the sign. In general, realist researchers reject a framework of validity that is tied to just one method of research.
Ethical Issues
Currently there is a reexamination of ethical standards to better protect the rights of research participants within each paradigm. Among these principles are voluntary participation, informed consent, confidentiality, anonymity, and the prevention of harm.
Reflective Assessment
Gephart (1999) provides an excellent overview of the three important and prevailing paradigms, views of the world, or philosophies of research, namely, Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. I think we might consider each school eclectic in the sense that it chooses the best from all sources. I also think that the three strands are different but can be interwoven, integrated, or mixed in research with the purpose of improving the process of inquiry and its results.
I regard the mixed paradigm as a more reasonable strategy for research. Why? Because I think that by understanding the main differences and weaknesses of each of the current leading theories, and by demonstrating that each one, separately, provides just one view of the world, the promise of impartiality of each might be criticized as illusory. An integrated perspective is a strategically open-minded approach, which can accommodate different theories and points of view from all directions and helps us better understand research problems and their solutions. Positivism provides a partial view of reality (its inductive view of quantitative data). Interpretivism and critical studies also offer partial views of life; each is subjective (a deductive view of qualitative data). Given the progressive changes evident in contemporary society, a holistic understanding of phenomena is imperative (viewing each problem from different angles, or from an interdisciplinary perspective). In this sense, “toleration of others requires broken confidence in the finality of our own truth” (Tuck, 1988, p. 21). So I think it is important to synthesize into a unity the best components of the various philosophical theories. My idea of being integrative is to retain the scientific commitment, but to combine it with the assumptions of qualitative research.
Article Review
3. Mackenzie, N., & Knipe, S. (2006). Research Dilemmas: Paradigms, methods and methodology. Retrieved 05 21, 2009, from Issues in Educational Research: http://www.iier.org.au/iier16/mackenzie.html
Mackenzie and Knipe (2006) criticize the perceived dichotomy between qualitative and quantitative research methods in research textbooks and journal articles: “considerable literature supports the use of mixed methods” (p. 1). The authors begin with a definition of the leading paradigms in educational research: the positivist, post-positivist, interpretive/constructivist, transformative, and pragmatic paradigms. They discuss the language commonly associated with each major research paradigm. The focus is on the basic issues in attempting to work with mixed research methods and how “the research paradigm and methodology work together to form a research study” (p. 1). To that end, they clarify the difference between methodology and method: “The most common definitions suggest that methodology is the overall approach to research linked to the paradigm or theoretical framework while the method refers to systematic modes, procedures or tools used for collection and analysis of data” (p. 4). The authors also clarify the difference between paradigms, methodologies, and the traditional “dichotomy” of quantitative and qualitative research methods and data collection tools. They conclude with a discussion and explanation of how to combine paradigms and methods.
Problem/Issue and the Importance/Significance
The paper examines the features of each paradigm, and the authors’ main aim is to “demystify” the role of paradigms in research. It questions the quantitative-qualitative “dichotomy” as a way to teach research methodology. Research texts and university courses “can create confusion to undergraduate, graduate and early career researchers” (Mackenzie and Knipe, 2006, p. 2). It suggests teaching a combination of both quantitative and qualitative methods of research, making use of the most valuable features of each. That is, “research methods in research texts and university courses should include mixed methods and should address the perceived dichotomy between qualitative and quantitative research methodology” (p. 1).
Research Questions
Why should qualitative and quantitative methods be combined? How do the research paradigm and methodology work together to form a research study? Is there a difference between methodology and methods? How can paradigms and methods be matched?
Sample/Sample selection process and Data Collection/Analysis Method
According to Mackenzie and Knipe (2006), each research paradigm, framework, or methodology applies a different sample/sample selection process and different data collection/analysis methods. Each overall framework or methodology of research is consistent with the definition of its paradigm and holds unique features specific to its particular approach. For example, the positivist and post-positivist paradigms usually apply experimental, quasi-experimental, and correlational designs, among others. The interpretivist/constructivist paradigm applies naturalistic, phenomenological, hermeneutic, interpretivist, and ethnographic approaches, along with multiple participant meanings and social and historical construction. The transformative paradigm applies critical theory, neo-Marxist, feminist, critical race theory, Freirean, and participatory or emancipatory approaches, among others. The pragmatic paradigm applies, among others, consequences of actions, problem-centered, pluralistic, and political approaches. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses.
Limitations/Delimitations/Assumptions
In this article, the authors assume that educational research should be taught as a mixed paradigm. They argue that teaching a combination of paradigms, methods, and tools is the right way to teach research methodology. Mixed method is itself a statement of what could be, rather than a groundbreaking notion, especially in the case of educational research.
Trustworthiness/Validity Considerations
The rejection of reliability and validity in qualitative research has resulted in a shift in how “rigor” is ensured through the researcher’s actions during the course of the research. Each researcher’s theoretical orientation has implications for every decision made in the research process, which in turn has implications for trustworthiness/validity considerations. The emphasis on strategies implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility once a study is completed.
Ethical Issues
According to the authors, many writers fail to define research terminology adequately and sometimes use terminology in ways that are not compatible with its intent, omitting significant concepts and leaving the reader with only part of the picture. Confusion can arise when authors use different terms with different meanings to discuss paradigms; for instance, methodology and method are often used interchangeably even though they have different meanings. The authors conclude that mixed method, like all research approaches, needs to be viewed through a critical lens while at the same time being recognized as a valid contribution to the field of research.
Reflective Assessment
From a philosophical perspective, showing how much reflection it takes to start an investigation, the article discusses different types of research and the language associated with them. The terms qualitative and quantitative refer to data collection methods, analysis, and reporting modes, rather than to the theoretical approach to the research, which is the methodology (the overall approach to research, linked to the paradigm or theoretical framework; the method refers to the systematic modes, procedures, or tools used for the collection and analysis of data).
This article applies almost directly to our situation as young researchers and is very useful for distinguishing the type of methodology to apply in our own research. It explains the strengths of each leading methodology or theory of research: positivism and post-positivism, interpretivism/constructivism, the transformative paradigm, pragmatism, and a mixed-methods approach, all of which are excellent theoretical frameworks. In a mixed research paradigm the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses. This view applies to constitutional interpretation. This past year I proposed the mixed paradigm in constitutional interpretation at the Annual Graduate Conference at King’s College, University of London:
http://www.iglrc.com/2008website/sessions.html
I'm delighted to say that my paper on “Eclecticism in Constitutional Interpretation” was selected for publication in the book Law and Outsiders: Norms, Processes and 'Othering' in the 21st Century, edited by Cian C. Murphy and Penny Green (Oxford: Hart Publishing, 2009).
http://www.hartpub.co.uk/books/search.asp?s=Legal+Theory&st=0&cp=6
4. Article Review
Monte-Sano, Chauncey (2008) Qualities of Historical Writing Instruction: A Comparative Case Study of Two Teachers’ Practices. American Educational Research Journal, 45 (4), p. 1045-1079.
The author analyzed qualitative and quantitative differences in the practices of two high school teachers of U.S. history, examining real-world history and writing instruction over time (seven months). The analysis included forty-two students’ performances that resulted from these practices rather than from researcher interventions. The author demonstrated that both teachers and students need training in the work of “writing evidence-based historical essays that involves sifting through evidence and constructing an interpretation in writing” (Monte-Sano, 2008, p. 1046). The results show that the teacher who supported students’ development in writing evidence-based historical essays was more successful in fostering students’ growth than the other. The author aims to help high school teachers of U.S. history become more aware of the different qualities of instruction that help students learn to read, write, and think historically. She explains that there are several qualities of instruction that support students’ growth in writing evidence-based historical essays: approaching history as evidence-based interpretation; reading historical texts and considering them as interpretations; supporting reading comprehension and historical thinking; asking students to develop interpretations and support them with evidence; and using direct instruction, guided practice, independent practice, and feedback to teach evidence-based writing. According to the author, “the act of writing alone is not sufficient for growth in evidence-based historical writing” (Monte-Sano, 2008, p. 1045).
Problem/Issue and the Importance/Significance
Students and teachers tend to have difficulty integrating documentary evidence into written accounts of past events. The author showed that the above-mentioned qualities of teaching foster growth in evidence-based historical writing. This study is important because history educators still know little about the relationships between teaching and learning with regard to evidence-based writing and reasoning (Monte-Sano, 2008).
Research Question(s)
How do teachers prepare students to write evidence-based historical essays? What messages about history, evidence, and writing do teachers’ practices convey? What opportunities to think and write historically do these teachers provide? How do teachers think about their subject matter, students, and pedagogy? In what ways do teachers’ practices coincide with improvements in students’ evidence-based historical writing?
Sample and sample selection process
Two teachers were selected in two urban high schools in Northern California, one class period per teacher; selection was based on class size in a U.S. history course. A total of 42 students from these classes participated in pre- and post-assessments of their historical learning. Over seven months the researcher identified patterns of growth (or lack thereof).
Data Collection Methods
Data were collected from four sources: interviews, observations, feedback, and classroom artifacts (assignments and materials). Interview questions asked teachers about their views of students’ progress and needs, and the reasoning behind their instructional decisions. Observations focused on what students did during class, how the teacher represented history, and what opportunities there were to learn evidence-based reasoning, argumentation, and writing. Field notes and data summary charts were completed during and after every observation. Feedback included teachers’ oral assessments of homework and essays.
Data Analysis Method
The author used mixed methods in an embedded multiple-case design that included both teacher and student data. For the teacher data, she organized field notes and interview data chronologically, transcribed and coded them, and used memos to track key ideas, highlight illustrative excerpts of class, and note what to look for in future observations. The data showed the amount of time each teacher devoted to a particular topic, the agreement in the number of assignments, the number of readings per topic, and the key components of assignments. For the student data, the author measured, through pre- and post-test instruments, how students composed arguments that recognize historical perspectives from multiple documents.
Limitations/Delimitations/Assumptions
The small number of teachers and students was one of the limitations. The researcher had to create a matrix of questions and possible answers and ensure that both instruments were appropriate for the age of the participants. Each instrument presented several points of agreement between sources and so allowed for multiple responses to the questions. Each one asked a why question that prompted students to make a supporting argument explaining why an action was taken in the past.
Trustworthiness/Validity Considerations
The author created specific instruments to study historical reasoning and writing in history. In terms of content validity, the pre- and post-test instruments were consistent with the following variables: the notions of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and the communication of arguments in writing. According to the author, the strength of these instruments lay in their ecological validity (Monte-Sano, 2008, p. 1051). Even so, the author noted that contextual changes over the course of multiple administrations of the tests can influence results. For example, the constraints of working in the classrooms led to certain agreement on the essay topics.
Ethical Issues
Comparing two teachers with different approaches (one teacher had students work in groups, while the other lectured and had students listen and work independently) “was not entirely fair” (Monte-Sano, 2008, p. 1079); however, the author explains that the comparison is instructive when considering how to develop students’ historical thinking and writing.
Reflective Assessment
The report is a comparative case study of teaching, and it uses student performance as a backdrop for claims of teaching effectiveness. The target was to examine two teachers’ practices with regard to the learning outcome of writing evidence-based essays. The strength of the article lies in four main aspects: (1) the notions of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and the communication of arguments in writing; (2) the list of qualities of instruction that support students’ growth in writing evidence-based historical essays; (3) the list of questions that teachers of history can ask of research; and (4) the use of multiple research methods to optimize the data collection and analysis process. The results show the usefulness of qualitative and quantitative comparisons of students’ work in determining how each class improves in writing evidence-based history essays.
Traditionally, teachers and students tend to view history as established fact (literal meaning of documents), not analysis or interpretation. Monte-Sano shows that there are creative ways that teachers implement approaches to history writing. This entails synthesizing and organizing information to suit the writer’s purposes; problem-based writing tasks, encouraging historical thinking, and transformation of knowledge already in the mind.
This article is an excellent example of how to work with both quantitative and qualitative research methods toward the goal of generating a new way to teach and learn history writing. To explore further, I read other articles and books that expand on the basic theme, and I came to the conclusion that teachers who embark on such a study of history must feel the passion of teaching and be prepared to devote time and energy to the endeavor. In my future work, I will be teaching legal history in Venezuela through court cases, using high court precedents. I think this affords creative alternatives such as reading historical texts and considering them as interpretations, and asking students to develop interpretations and support them with evidence.
1. Krauss, S., (2005) Research Paradigms and Meaning Making: A Primer.
The Qualitative Report, 10 (4), 758-770.
The paper Research Paradigms and Meaning Making: A Primer provides an introduction to some of the basic issues in attempting to work with both quantitative and qualitative research methods. It explains how qualitative data analysis can be used to organize and categorize different levels and forms of meaning. It argues that the heart of the quantitative vs. qualitative “debate” is philosophical, not methodological, and it offers an overview of the epistemological differences of quantitative and qualitative research methodologies. The article introduces the notion of meaning making in social sciences research and how it actually occurs through qualitative data analysis. It defines meaning as “the underlying motivation behind thoughts, actions and even the interpretation and application of knowledge” (Krauss, 2005, p. 763). The task of constructing meaning through qualitative data analysis is explained through a variety of perspectives and approaches.
Problem/Issue and the Importance/Significance
The focus is on the task of constructing meaning through qualitative data analysis. This paper is significant because it examines the philosophical realist paradigm and introduces the notion of “meaning making” in research methods, and specifically how meaning is generated from qualitative data analysis. To that end, it explains the epistemological differences between quantitative and qualitative research, which allow us to understand phenomena and to obtain more realistic results. Some examples are also provided.
Research Question(s)
What are the epistemological similarities and differences between the quantitative and qualitative research paradigms? How can the realist philosophical paradigm accommodate both quantitative and qualitative research paradigms? How can meaning be constructed and organized using a qualitative data analysis approach?
Sample and sample selection process
Data selection in qualitative research is intuitive, aiming to discover (not measure) potentially important insights. The author explains the need to use multiple research methods to optimize the data selection process, increasing both the breadth and depth of data selection.
Data Collection Methods
Krauss (2005) found that in qualitative data collection, meaning is constructed on “a variety of levels of daily life through the exchange of ideas, interaction, and agreement between the researcher and the participants” (p. 764). The author supports his point of view through examples from the social sciences literature about how meaning can be constructed and organized using a qualitative data analysis approach (interpretivism). The author also cites a multi-year religiosity initiative, in which he was involved in conducting both qualitative and quantitative research, to assess religiosity in the lives of young people.
Data Analysis Method
Krauss (2005) argues that data analysis in qualitative research is guided by a reflective paradigm in an attempt to acquire social knowledge. In this sense, according to the author, meaning is constructed in a variety of ways; that is, “through construction, the researcher is not a blank slate; rather s/he is an active participant in the process” (Krauss, 2005, p. 767). This means that, epistemologically, the researcher is engaged in the setting, participating in the act of “being with” the respondents in their lives to generate meaning of them (p. 769). In addition, developing themes and storylines featuring the words and experiences of participants themselves adds richness to the findings.
Limitations/Delimitations/Assumptions
Krauss (2005) explains that the realist paradigm has the unique goal of facilitating the meaning-making process, an important learning facilitator with the power to encourage transformative learning. The realist philosophical paradigm attempts to accommodate both quantitative and qualitative research methods. In the area of religion, for example: “the result of the process was a major study that tapped into the richness of individual religious experience, along with a broader understanding of religious behaviors and knowledge levels across large groups of young people” (p. 758). As a whole, the realist paradigm has fewer limitations than either method on its own.
Trustworthiness/Validity Considerations
According to the author, realist researchers reject the framework of validity that is commonly accepted in more quantitative research in the social sciences. Nevertheless, realist research inherently assumes that there is some reality that can be observed with greater or less accuracy or validity. In this sense, “rigor in qualitative data analysis is a necessary element for maximizing the potential for generating meaning” (Krauss, 2005, p.765). This rigor provides trustworthiness to the results.
Ethical Issues
Qualitative researchers can operate under different epistemological assumptions from quantitative researchers, and ethical issues can sometimes result in confusion and uncertainty among researchers. In qualitative research, as in quantitative research, researchers are expected to maintain high standards of academic rigor and to behave with honesty and integrity. Ethical issues can also emerge from value conflicts. I think that being a ‘purist’ researcher, looking only at one small portion of a reality that cannot be split or unitized “without losing the importance of the whole phenomenon” (Krauss, 2005, p. 767), brings an ethical issue to the research process.
Reflective Assessment
The concept of meaning making in research methods, and how meaning is generated from qualitative data analysis, are the most important contributions of this paper. The article discusses the philosophical differences between quantitative and qualitative research. Quantitative research is positivist, objective, and scientific; its analysis can be accomplished with the statistical software packages commonly used for quantitative (descriptive) data. Qualitative researchers operate under naturalist, constructivist, eclectic, and subjective assumptions (researcher interpretation); qualitative research is a highly intuitive activity that contributes greatly to the construction of meaning. As researchers, we should focus on “the significance of different levels of meaning such as worldviews or philosophies of life, and the importance of meaning as a critical element to human existence and learning” (Krauss, 2005, p. 767). The author makes a good point regarding the need to use multiple research methods to optimize the data collection and analysis process; a mixed approach increases both the breadth and depth of data collection and analysis. He provides an excellent overview of the basic issues in working with both quantitative and qualitative research methods toward the goal of generating meaning. Using both methods together contributes to a better understanding of phenomena. I think it is important to use multiple research methods because doing so yields a broader understanding of behaviors and knowledge levels across large groups of people. Different philosophical assumptions, or theoretical paradigms, about the nature of reality are essential to understanding the overall perspective from which a study is designed and carried out.
Within this holistic approach, the critical realism framework, qualitative and quantitative methodologies together are appropriate toward the goal of generating meaning and understanding thinking, behavior, and worldview formation. Indeed, the heart of the quantitative-qualitative “debate” is philosophical, not methodological.
Article Review
2. Gephart, R. (1999, November 14). Paradigms and Research Methods. Retrieved May 21, 2009, from Academy of Management, Research Methods Division: http://division.aomonline.org/rm/1999_RMD_Forum_Paradigms_and_Research_Methods.htm
Gephart (1999) explains three prevailing paradigms or views of the world that currently shape research: Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. The author introduces each paradigm, describes the key features of each worldview or “form of scholarship,” the nature of the knowledge pursued, and the different means by which knowledge is produced and assessed within each paradigm or worldview. In brief, the three strands of thinking and research differ in the following way: Positivism assumes an objective world that scientific methods (for example, experiments and surveys) can more or less readily represent and measure statistically, seeking to predict and explain causal relations among key variables. Post-positivists or interpretivists assert that these methods “impose a view of the world on subjects;” their concern is the interplay between objective and subjective meanings. Critical postmodernists, on the other hand, argue that these imposed views or measures implicitly “support forms of scientific knowledge that explicitly reproduce capitalist structures and associated hierarchies of inequality” (p. 5). In this sense, the goal is “social transformation involving the displacement of existing structures of domination” (p. 6). The author concludes that these paradigms or theories are somewhat “separate but not greatly distant from one another” (p. 7). In general, the goal of research is an adequate reflection of people’s experience when making inquiries from one or more theoretical frameworks.
Problem/Issue and the Importance/Significance
The paper addresses the need to help readers understand some of the basic assumptions underlying the forms of research present in the field: Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. The author attempts to help us understand the key features and usefulness of each paradigm, and how the three paradigms can be interwoven into research.
Research Question(s)
What are the epistemological similarities and differences between positivism, interpretivism, and critical postmodernism? What are the main assumptions, key ideas, theories, figures, goals, criteria, units of analysis, and research methods of each paradigm?
Sample and Sample Selection Process
Sample and sample selection vary according to the specific paradigm. Positivism uses quantitative criteria; interpretivism and critical postmodernism apply qualitative criteria and are therefore more intuitive and flexible. However, rigor and specific principles remain essential for good research.
Data Collection Methods
In this article, the author uses grounded theory development and suggests mixing the quantitative and qualitative paradigms. He explains, however, that the data collection and research methods of each paradigm differ: (a) Positivist research uses experiments, questionnaires, secondary data analysis, quantitatively coded documents, Likert scaling, and structural equation modeling; its qualitative counterpart uses grounded theory testing, among others. (b) Interpretivism uses ethnography, participant observation, interviews, conversational analysis, grounded theory development, case studies, conversational and textual analysis, and expansion analysis. (c) Critical Postmodernism uses field research, historical analysis, and dialectical analysis.
Data Analysis Method
The author explains that the unit of analysis of positivist research is the variable; the unit of analysis of interpretivism is the meaning; and Critical theory-Postmodernism (PM) uses deconstruction and textual analysis as its units of analysis.
Limitations/Delimitations/Assumptions
Among the limitations, delimitations, and assumptions of each paradigm are the following: Positivist research takes as its goal uncovering truth and facts as quantitatively specified relations among variables. Interpretivism/Constructivism is a related approach, based on analysis, which seeks persuasion using ‘sensitizing’ concepts; its goals are to describe meanings, understand members' definitions of the situation, and examine how objective realities are produced. Critical theory-Postmodernism (PM) aims to investigate and uncover hidden interests, enable a more informed consciousness, displace ideology with scientific insights, and effect change.
Trustworthiness/Validity Considerations
Criteria of validity vary according to each paradigm. The author explains that positivist research uses prediction and explanation, rigor, internal and external validity, and reliability. Interpretivism uses trustworthiness and authenticity. Critical theory-Postmodernism (PM) uses theoretical consistency, historical insights, transcendent interpretations, basis for action, change potential, and mobilization; its units of analysis are contradictions, ‘incidents of exploitation,’ and the sign. In general, realist researchers reject any framework of validity that is tied to just one method of research.
Ethical Issues
There is currently a reexamination of ethical standards within each paradigm to better protect the rights of research participants. Among these principles are voluntary participation, informed consent, confidentiality, anonymity, and prevention of risk of harm.
Reflective Assessment
Gephart (1999) provides an excellent overview of three important and prevailing paradigms, views of the world, or philosophies of research, namely Positivism, Post-positivism or Interpretivism, and Critical Postmodernism. I think we might consider each school eclectic in the sense that it chooses the best from all sources. I also think that the three strands, though different, can be interwoven, integrated, or mixed in research with the purpose of improving the process of inquiry and its results.
I regard the mixed paradigm as a more reasonable strategy for research. Why? Because by understanding the main differences and weaknesses of the current leading theories, and by demonstrating that each one, separately, provides just one view of the world, the promise of impartiality of each might be criticized as illusory. An integrated perspective is a strategically open-minded approach, tolerant of different theories and points of view coming from different directions, which helps us better understand research problems and their solutions. Positivism provides a partial view of reality (a deductive view of quantitative data). Interpretivism and critical studies also offer partial views of life; each is subjective (an inductive view of qualitative data). Given the progressive changes evident in contemporary society, a holistic understanding of phenomena is imperative, viewing each problem from different angles or from an interdisciplinary perspective. In this sense, “toleration of others requires broken confidence in the finality of our own truth” (Tuck, 1988, p. 21). So I think it is important to synthesize into a unity the best components of the various philosophical theories. My idea of being integrative is to retain the scientific commitment, but to combine it with the assumptions of qualitative research.
Article Review
3. Mackenzie, N., & Knipe, S. (2006). Research Dilemmas: Paradigms, methods and methodology. Retrieved May 21, 2009, from Issues in Educational Research: http://www.iier.org.au/iier16/mackenzie.html
Mackenzie and Knipe (2006) criticize the perceived dichotomy between qualitative and quantitative research methods in research textbooks and journal articles: “considerable literature supports the use of mixed methods” (p. 1). The authors begin with a definition of the leading paradigms in educational research: positivist, post-positivist, interpretive/constructivist, transformative, and pragmatic. They discuss the language commonly associated with each major research paradigm. The focus is on the basic issues in working with mixed research methods and how “the research paradigm and methodology work together to form a research study” (p. 1). To that end, they clarify the difference between methodology and method: “The most common definitions suggest that methodology is the overall approach to research linked to the paradigm or theoretical framework while the method refers to systematic modes, procedures or tools used for collection and analysis of data” (p. 4). The authors also distinguish between paradigms, methodologies, and the traditional “dichotomy” of quantitative and qualitative research methods and data collection tools. They conclude with a discussion and explanation of how to combine paradigms and methods.
Problem/Issue and the Importance/Significance
The paper examines the features of each paradigm; the authors' main argument is to “demystify” the role of paradigms in research. It questions the quantitative-qualitative “dichotomy” as a way to teach research methodology: research texts and university courses “can create confusion to undergraduate, graduate and early career researchers” (Mackenzie and Knipe, 2006, p. 2). It suggests teaching a combination of quantitative and qualitative methods of research, making use of the most valuable features of each. That is, “research methods in research texts and university courses should include mixed methods and should address the perceived dichotomy between qualitative and quantitative research methodology” (p. 1).
Research Questions
Why should qualitative and quantitative methods be combined? How do the research paradigm and methodology work together to form a research study? Is there a difference between methodology and methods? How can paradigms and methods be matched?
Sample/Sample selection process and Data Collection/Analysis Method
According to Mackenzie and Knipe (2006), each research paradigm, framework, or methodology applies a different sample/sample selection process and different data collection/analysis methods. Each overall framework or methodology is consistent with the definition of its paradigm and holds unique features specific to its particular approach. For example, the positivist and post-positivist paradigms usually apply experimental, quasi-experimental, and correlational methods, among others. The interpretivist/constructivist paradigm applies naturalistic, phenomenological, hermeneutic, interpretivist, and ethnographic approaches, attending to multiple participant meanings and to social and historical construction. The transformative paradigm applies critical theory, neo-Marxist, feminist, critical race theory, Freirean, and participatory or emancipatory approaches, among others. The pragmatic paradigm applies, among others, approaches that are consequence-of-action oriented, problem-centered, pluralistic, and political. In a mixed research paradigm, the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses.
Limitations/Delimitations/Assumptions
In this article, the authors assume that educational research should be taught as a mixed paradigm. They argue that a combination of paradigms, methods, and tools is the right way to teach research methodology. The mixed method is itself a statement of what could be, rather than a groundbreaking notion, especially in the case of educational research.
Trustworthiness/Validity Considerations
The rejection of reliability and validity in qualitative research has resulted in a shift in how rigor is ensured, away from the researcher's actions during the course of the research. Each researcher's theoretical orientation has implications for every decision made in the research process, which in turn has implications for trustworthiness/validity considerations. The emphasis on strategies implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility once a study is completed.
Ethical Issues
According to the authors, many writers fail to adequately define research terminology, sometimes use terminology in ways inconsistent with its intent, and omit significant concepts, leaving the reader with only part of the picture. Confusion can thus arise when authors use different terms with different meanings to discuss paradigms; for instance, methodology and methods are often used interchangeably even though they have different meanings. The authors conclude that the mixed method, like all research approaches, needs to be viewed through a critical lens while at the same time being recognized as a valid contribution to the field of research.
Reflective Assessment
From a philosophical perspective, showing how much reflection it takes to start an investigation, the article discusses different types of research and the language associated with them. The terms qualitative and quantitative refer to the data collection methods, analysis, and reporting modes, rather than to the theoretical approach to the research, which is the methodology (the overall approach to research linked to the paradigm or theoretical framework, while the method refers to the systematic modes, procedures, or tools used for collecting and analyzing data).
This article applies almost directly to our situation as young researchers and is very useful for distinguishing the type of methodology to apply in our own research. It explains the strengths of each leading methodology or theory of research: positivism and post-positivism, interpretivism/constructivism, the transformative paradigm, pragmatism, and the mixed-methods approach, all of which are excellent theoretical frameworks. In a mixed research paradigm, the decision-making process does not necessarily follow a linear path; the process is more realistically cyclical, and the researcher can mix methods or make changes as the research progresses. This view applies to constitutional interpretation. This past year I proposed the mixed paradigm in constitutional interpretation at the Annual Graduate Conference at King's College, University of London:
http://www.iglrc.com/2008website/sessions.html
I'm delighted to say that my paper on “Eclecticism in Constitutional Interpretation” was selected for publication in the book Law and Outsiders: Norms, Processes and 'Othering' in the 21st Century, edited by Cian C. Murphy and Penny Green (Oxford: Hart Publishing, 2009).
http://www.hartpub.co.uk/books/search.asp?s=Legal+Theory&st=0&cp=6
Article Review
4. Monte-Sano, C. (2008). Qualities of Historical Writing Instruction: A Comparative Case Study of Two Teachers’ Practices. American Educational Research Journal, 45(4), 1045-1079.
The author analyzed qualitative and quantitative differences in the practices of two high school teachers of U.S. History, in real-world history and writing instruction over time (seven months). The analysis included forty-two students’ performances that resulted from these practices rather than from researcher interventions. The author demonstrated that both teachers and students need training in the work of “writing evidence-based historical essays that involves sifting through evidence and constructing an interpretation in writing” (Monte-Sano, 2008, p. 1046). The results show that the teacher who supported students’ development in writing evidence-based historical essays was more successful in fostering students’ growth than the other. The author aims to help high school teachers of U.S. History become better acquainted with the different qualities of instruction that help students learn to read, write, and think historically. She explains that several qualities of instruction support students’ growth in writing evidence-based historical essays: approaching history as evidence-based interpretation; reading historical texts and considering them as interpretations; supporting reading comprehension and historical thinking; asking students to develop interpretations and support them with evidence; and using direct instruction, guided practice, independent practice, and feedback to teach evidence-based writing. According to the author, “the act of writing alone is not sufficient for growth in evidence-based historical writing” (Monte-Sano, 2008, p. 1045).
Problem/Issue and the Importance/Significance
Students and teachers tend to have difficulty integrating documentary evidence into written accounts of past events. The author showed that the above-mentioned qualities of teaching foster growth in evidence-based historical writing. This study is important because history educators still know little about the relationships between teaching and learning with regard to evidence-based writing and reasoning (Monte-Sano, 2008).
Research Question(s)
How do teachers prepare students to write evidence-based historical essays? What messages about history, evidence, and writing do teachers’ practices convey? What opportunities to think and write historically do these teachers provide? How do teachers think about their subject matter, students, and pedagogy? In what ways do teachers’ practices coincide with improvements in students’ evidence-based historical writing?
Sample and sample selection process
Two teachers were selected from two urban high schools in Northern California, one class period per teacher; selection was based on class size in a U.S. History course. A total of 42 students from these classes participated in pre- and post-assessments of their historical learning. Over seven months, the researcher identified patterns of growth (or the lack thereof).
Data Collection Methods
Data were collected from four sources: interviews, observations, feedback, and classroom artifacts (assignments and materials). Interview questions asked teachers for their views of students’ progress and needs, and the reasoning behind their instructional decisions. Observations focused on what students did during class, how the teacher represented history, and what opportunities there were to learn evidence-based reasoning, argumentation, and writing. Field notes and data summary charts were completed during and after every observation. Feedback included teachers’ oral assessments of homework and essays.
Data Analysis Method
The author used mixed methods in an embedded multiple-case design that included analysis of both teacher and student data. For the teacher data, she organized field notes and interview data chronologically, transcribed and coded them, and used memos to track key ideas, highlight illustrative excerpts of class, and note what to look for in future observations. The data showed the amount of time each teacher devoted to a particular topic, the number of assignments, the number of readings per topic, and the key components of assignments. For the student data, the author measured, through pre- and post-test instruments, how students composed arguments that recognize historical perspectives from multiple documents.
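The quantitative strand of such a pre/post design can be illustrated with a minimal sketch. The scores, class labels, and the mean_gain helper below are hypothetical, invented purely for illustration; they are not data or code from Monte-Sano's study:

```python
# Hypothetical sketch of the quantitative side of a pre/post design:
# each student is scored on a rubric before and after instruction,
# and per-class mean gains are compared.

def mean_gain(pre_scores, post_scores):
    """Average per-student gain between pre- and post-assessments."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Invented rubric scores for two hypothetical classes.
class_a = {"pre": [2, 3, 2, 4], "post": [4, 5, 3, 6]}
class_b = {"pre": [3, 2, 3, 3], "post": [3, 3, 4, 3]}

gain_a = mean_gain(class_a["pre"], class_a["post"])  # 1.75
gain_b = mean_gain(class_b["pre"], class_b["post"])  # 0.5
```

In a mixed design like the one described above, a comparison of such gains would then be interpreted alongside the qualitative record (field notes, interviews, artifacts) rather than on its own.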
Limitations/Delimitations/Assumptions
The small number of teachers and students was one of the limitations. The researcher had to create a matrix of questions and possible answers and ensure that both instruments were appropriate for the age of the participants. Each instrument presented several points of agreement between sources and so allowed for multiple responses to the questions. Each one asked a “why” question that prompted students to make a supporting argument explaining why an action was taken in the past.
Trustworthiness/Validity Considerations
The author created specific instruments to study historical reasoning and writing in history. In terms of content validity, the pre- and post-test instruments were consistent with the following variables: the notions of historical reasoning as analysis of evidence, the use of evidence to construct interpretations of the past, and the communication of arguments in writing. According to the author, the strength of these instruments lay in their ecological validity (Monte-Sano, 2008, p. 1051). Even so, the author noted that contextual changes over the course of multiple administrations of the tests can influence results; for example, the constraints of working in the classrooms led to a certain degree of agreement on the essay topics.
Ethical Issues
Comparing two teachers with different approaches (one teacher had students work in groups; the other lectured while students listened and worked independently) “was not entirely fair” (Monte-Sano, 2008, p. 1079); however, the author explains that the comparison is instructive when considering how to develop students’ historical thinking and writing.
Reflective Assessment
The report is a comparative case study of teaching that uses student performance as a backdrop for claims of teaching effectiveness. The aim was to examine two teachers’ practices with regard to the learning outcome of writing evidence-based essays. The strength of the article lies in four main aspects: (1) the notions of historical reasoning as analysis of evidence, use of evidence to construct interpretations of the past, and communication of arguments in writing; (2) the list of qualities of instruction that support students’ growth in writing evidence-based historical essays; (3) the list of questions that teachers of history can ask of research; and (4) the use of multiple research methods to optimize the data collection and analysis process. The results show the usefulness of qualitative and quantitative comparisons of students’ work for determining how each class improves in writing evidence-based history essays.
Traditionally, teachers and students tend to view history as established fact (the literal meaning of documents), not as analysis or interpretation. Monte-Sano shows that there are creative ways for teachers to implement approaches to history writing: synthesizing and organizing information to suit the writer’s purposes, problem-based writing tasks, encouraging historical thinking, and transforming knowledge already in the mind.
This article is an excellent example of how to work with both quantitative and qualitative research methods toward the goal of generating a new way to teach and learn history writing. To explore further, I read other articles and books that expand on the basic theme, and I came to the conclusion that teachers who embark on such a study of history must feel the passion of teaching and be prepared to devote time and energy to the endeavor. In my own future work, I will be teaching legal history in Venezuela through court cases, using high court precedents. I think this approach affords creative alternatives such as reading historical texts and considering them as interpretations, and asking students to develop interpretations and support them with evidence.
