Saturday 26 September 2009

ECUR 809 Assignment # 2 - Model

Assignment # 2 Model or approach to evaluate the ECS Programming for Children with Severe Disabilities

Summary: Children with severe/profound disabilities are eligible for Program Unit Funding from Alberta Education. According to the Medicine Hat Catholic Separate Regional Division, the “ECS Programming for Children with Severe Disabilities” identifies eligible children and then offers educational programming that must meet each child’s individual needs. The educational programming combines center-based programs and in-home programs. The teacher develops an Individual Program Plan with goals and objectives reflective of the child’s needs. The center-based programming takes place in settings such as preschools, kindergartens and day cares.

What approach is appropriate to evaluate this program? To evaluate the ECS programming effectively, I suggest using qualitative methods within a “naturalistic” evaluation model, following Emil J. Posavac and Raymond G. Carey (2003), combined with the “participant-oriented” evaluation approach described by Jody L. Fitzpatrick, James R. Sanders and Blaine R. Worthen (2004).

In the naturalistic evaluation model, “the evaluator becomes the data gathering instrument, not surveys or records. By personally observing all phases of the program and holding detailed conversation with stakeholders, the evaluator seeks to gain a rich understanding of the program, its clients, and the social environment” (Posavac & Carey, 2003, p. 28). In other words, personal observations and detailed reports are necessary to document the home visits, which should be carefully planned and recorded. This model is also useful for describing the child’s instruction in a classroom setting at a center or school. The steps in preparing to conduct an evaluation comprise identifying the program and its stakeholders, becoming familiar with information needs, planning the evaluation, and evaluating the evaluation itself.

In the participant-oriented evaluation approach, evaluators should not be distracted from what is really happening to the participants in the program by focusing only on “stating and classifying objectives, designing an elaborate evaluation system, developing technically defensible objective instrumentation, and preparing long detailed technical reports” (Fitzpatrick, Sanders & Worthen, 2004, p. 130). The participant-oriented evaluation stresses firsthand experience with program activities and settings and the involvement of program participants in the evaluation. This approach is “aimed at observing and identifying all (or as many as possible) of the concerns, issues, and consequences integral to the human services enterprise” (p. 131). Evaluators need to avoid focusing only on results or on isolated comments, numbers, charts, figures, and tables, thereby missing important individual facts.

In short, a combined “naturalistic and participant-oriented” approach will provide the plurality of judgments, criteria and methods of inquiry that will help evaluators portray the different values and needs of the individuals and groups served by the educational programming. This model requires the active involvement of participants. By involving participants in determining the criteria and boundaries of the evaluation, evaluators serve an important educative function by creating “better-informed” program participants.

Sources:

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.

Posavac, E. J., & Carey, R. G. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.

Medicine Hat Catholic Separate Regional Division #20. ECS Programming for Children with Severe Disabilities. Retrieved September 13, 2009, from
http://www.mhcbe.ab.ca/cec/

Wednesday 23 September 2009

A Revised Version of Assignment # 1

Assignment #1 A completed evaluation Case: Client Satisfaction Survey Analysis, Quebec Regional Office, Human Resources Development (HRDC) in Quebec, Canada, 2001.

The study is an analysis of requirements and a proposal for a client satisfaction measurement program. Circum Network Inc. is listed as the author of the study, which was prepared for Evaluation Services, Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada.

Model or process used in the evaluation: an improvement-focused model (Posavac & Carey, 2003, p. 29). While providing the essential methodological foundations, the “self-directed” training document takes a pragmatic and “integrated” approach to conducting client satisfaction surveys. It includes devices such as decision trees and checklists. The project was carried out in three phases. First, the team developed the standardized questionnaires. Then the researchers developed an operational framework for the client satisfaction measurement program. Finally, they developed the analytical support tools and mechanisms. According to the report, there is a consensus within the HRDC-Quebec Region that systematic, rigorous measurement of client satisfaction with the products and services offered by the Region is essential to building the fifth pillar of the region’s vision: delivering services of the highest quality. There is also a consensus that the primary responsibility for improving the region’s services lies with the HRCCs and other operational centres, because it is they that control the daily delivery of services.
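
To make the “measuring and sampling” side of such a guide concrete, below is a minimal sketch in Python; the figures and function names are my own illustrative assumptions, not material from the Circum Network report. It shows two calculations a client satisfaction measurement program typically relies on: choosing a sample size for a given margin of error, and summarizing answers on a 5-point satisfaction scale.

import math
from statistics import mean

def sample_size(population, margin=0.05, z=1.96):
    """Sample size for estimating a proportion (worst case p = 0.5),
    with a finite-population correction."""
    n0 = (z ** 2 * 0.25) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def summarize(scores):
    """Mean score and share of 'satisfied' clients (4 or 5 on a 5-point scale)."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "pct_satisfied": round(100 * sum(s >= 4 for s in scores) / len(scores), 1),
    }

# Hypothetical example: one centre serving about 12,000 clients, plus a small pilot.
print(sample_size(12000))                      # about 373 completed questionnaires needed
print(summarize([5, 4, 4, 3, 5, 2, 4, 5, 4]))  # mean 4.0, 77.8% satisfied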

Strengths: (1) The goals were to plan a client satisfaction measurement program and to produce an analysis of requirements and a proposal for it; the team also produced a self-directed training document on the implementation of client surveys. (2) The report clearly presents the development of the standardized questionnaires, the operational and implementation framework, and the pre-testing. (3) The report is a guide for human resources development staff (employees who do not necessarily have the knowledge required to conduct formal, systematic surveys). The tools offered in the guide are to be used for measuring, sampling, collecting data, analyzing data, interpreting results, and implementing recommendations. The results included standardized questionnaires for various types of clients and various service conditions. (4) Researchers/evaluators helped program staff learn how to discover discrepancies between program objectives and the needs of the target population, between program implementation and program plans, between expectations of the target population and the services actually delivered, or between outcomes achieved and outcomes projected (Posavac & Carey, 2003, p. 29).
In this sense, I think an approach similar to Stake's countenance model, as explained by Jay Wilson (2009), was used in this case because there was a need for formalized evaluation: descriptive, not just anecdotal, data was necessary. The evaluation included description and judgment; intents and observations were compared to standards and then a judgment was made. In short, there was a mixture of quantitative and qualitative elements and, as a result, it was an “artistic” evaluation (creative thinking in the minds of the evaluators).
Regarding possible weaknesses, I see a couple of things that could be considered in future program evaluations: (1) the report does not compare or discuss all sides of the program, both positive and negative; a deliberation of pros and cons is not evident in the survey evaluation, and it would be useful if this discussion or deliberative process could take place. (2) It does not describe participants’ “reactions” to and “learning” from the innovative program evaluation, nor “behavior” changes in real job performance and other “results.” A round table or discussion of these issues could help enlighten the program evaluation.
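
As an aside, the discrepancy-finding idea at the heart of the improvement-focused model can be illustrated with a minimal sketch in Python; the targets and results below are invented for illustration and are not taken from the HRDC report.

# Compare projected outcomes with achieved outcomes (hypothetical figures).
targets = {"clients_surveyed": 400, "response_rate_pct": 60, "satisfaction_pct": 80}
actuals = {"clients_surveyed": 310, "response_rate_pct": 48, "satisfaction_pct": 83}

for indicator, target in targets.items():
    gap = actuals[indicator] - target
    status = "met" if gap >= 0 else "short by " + str(-gap)
    print(f"{indicator}: target {target}, actual {actuals[indicator]} ({status})")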

Sources:
Circum Network Inc. (2001). An integrated approach to conducting client satisfaction surveys: Analysis of requirements and proposal for a client satisfaction measurement program. Prepared for Evaluation Services, Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada.
Retrieved September 7, 2009 from:
http://www.circum.com/textes/program_hrdc_quebec_2001.pdf

Crawford, D. (2009). Evaluation exploration. Retrieved September 4, 2009, from http://www.ag.ohio-state.edu/~brick/evalexpl.htm

Miller, R., & Butler, J. (2008). Using an adversary hearing to evaluate the effectiveness of a military program. The Qualitative Report, 13(1), 12-25. Retrieved September 5, 2009, from http://www.nova.edu/ssss/QR/QR13-1/miller.pdf

Posavac, E. J., & Carey, R. G. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.

Wednesday 9 September 2009

ECUR 809 Assignment #1 (First Version)

ECUR 809.3-83551 Assignment #1 Case: Client Satisfaction Survey Analysis, Quebec Regional Office, Human Resources Development (HRDC) in Quebec, Canada, 2001.

Summary: Circum Network Inc. is listed as the author of this study. The program evaluation was prepared for Evaluation Services, Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada. It is an analysis of requirements and a proposal for a client satisfaction measurement program. It is a guide for human resources development staff (employees who do not necessarily have the knowledge required to conduct formal, systematic surveys), that is, a self-directed training document on the implementation of client surveys. The tools offered in the guide are to be used for measuring, sampling, collecting data, analyzing data, interpreting results, and implementing recommendations. The document clearly presents the development of standardized questionnaires for various types of clients and various service conditions, the operational and implementation framework, and the pre-testing.

The model or process used in the evaluation is the improvement-focused model.
The purpose is to help program staff learn how to discover discrepancies between program objectives and the needs of the target population, between program implementation and program plans, between expectations of the target population and the services actually delivered, and between outcomes achieved and outcomes projected, among others (Posavac & Carey, 2003, p. 29). While providing the essential methodological foundations, the “self-directed” training document takes a pragmatic and “integrated” approach to conducting client satisfaction surveys. It includes devices such as decision trees and checklists. The project was carried out in three phases comprising the development of (1) the standardized questionnaires, (2) the operational framework for the client satisfaction measurement program, and (3) the analytical support tools and mechanisms. According to the report, there is a consensus within the HRDC-Quebec Region that systematic, rigorous measurement of client satisfaction with the products and services offered by the Region is essential to building the fifth pillar of the region’s vision: delivering services of the highest quality. There is also a consensus that the primary responsibility for improving the region’s services lies with the HRCCs and other operational centers, because it is they that control the daily delivery of services.

I see a couple of things that could be considered in future program evaluations: (1) the report does not compare or discuss all sides of the program, both positive and negative; a deliberation of pros and cons is not evident in the survey evaluation, and it would be useful if this discussion or deliberative process could take place. (2) It does not describe participants’ “reactions” to and “learning” from the innovative program evaluation, nor “behavior” changes in real job performance and other “results.” A round table or discussion of these issues could help improve the process of program evaluation. Nelson

Sources:
Circum Network Inc. (2001). An integrated approach to conducting client satisfaction surveys: Analysis of requirements and proposal for a client satisfaction measurement program. Prepared for Evaluation Services, Information and Strategic Planning Directorate, Quebec Regional Office, Human Resources Development Canada. Retrieved September 7, 2009 from:
http://www.circum.com/textes/program_hrdc_quebec_2001.pdf

Crawford, D. (2009). Evaluation exploration. Retrieved September 4, 2009, from http://www.ag.ohio-state.edu/~brick/evalexpl.htm

Miller, R., & Butler, J. (2008). Using an adversary hearing to evaluate the effectiveness of a military program. The Qualitative Report, 13(1), 12-25. Retrieved September 5, 2009, from http://www.nova.edu/ssss/QR/QR13-1/miller.pdf

Posavac, E. J., & Carey, R. G. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.

Please see above a more complete version of my Assignment # 1 (after Dr. Wilson's comments, Sept 25th, 2009)

Wednesday 2 September 2009

What is Program Evaluation?

First Session:
To define and understand “What is program evaluation?”
To understand the historical foundations of program evaluation.
To identify and develop appropriate evaluation assessment techniques used in educational and other program settings.

"Program evaluation is the systematic collection of information for use to improve effectiveness and make decisions with regard to what those programs are doing and affecting." University of Minnesota http://www.evaluation.umn.edu/

"Essentially you are trying to answer the question, "Does the program do what it says it does?". Because evaluation is on-going your evaluation may steer your client in a particular direction and it will also be used to inform the next evaluation" (Jay Wilson, 2009)

I found the following useful links:
http://www.epa.gov/evaluate/whatis.htm
In its broadest definition, Program Evaluation is a systematic way to learn from past experience by assessing how well a program is working.
- Program evaluation is almost always retrospective, i.e., examining and learning from experience, though it may include prospective elements. For example, an analytical study that makes use of data on past performance to estimate future results would be an evaluation, but one done prospectively to estimate the effectiveness of a new environmental program based on assumptions about its design and/or operation would not be.
- An evaluation can be systematic without being elaborate or expensive. It’s possible to keep it simple and affordable.

http://www.epa.gov/evaluate/whatis.pdf

http://www.ocde.k12.ca.us/downloads/assessment/WHAT_IS_Program_Evaluation.pdf

http://www.en.wikipedia.org/wiki/Program_evaluation
Program evaluation is a systematic method for collecting, analyzing, and using information to answer basic questions about projects, policies and programs. Program evaluation is used in the public and private sector and is taught in numerous universities. Program evaluations can involve quantitative methods of social research or qualitative methods or both. People who do program evaluation come from many different backgrounds: sociology, psychology, economics, social work.

http://gsociology.icaap.org/methods/evaluationbeginnersguide.pdf

http://non-profit-governance.suite101.com/article.cfm/board_member_selfassessment
umass.edu/oapa/oapa/publications/online_handbooks/program_based.pdf
http://www.macleans.ca/education/universities/article.jsp?content=20070323_155000_816
http://www.medicine.usask.ca/pt/general-information/school-of-physical-therapy-operations-manual-1/Program%20Evaluation.pdf/view
http://www.evaluationcanada.ca/site.cgi?s=4&ss=2&_lang=an
http://www.uwex.edu/ces/pdande/evaluation/index.html

Requested Readings:
http://www.managementhelp.org/evaluatn/fnl_eval.htm
http://pathwayscourses.samhsa.gov/eval101/eval101_toc.htm
http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._formative.htm
http://delicious.com/wi11y0/809
http://www.evaluationcanada.ca/site.cgi?s=1
http://www.eval.org/

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R.(2004). Program evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Posavac, E. J., & Carey, R. G. (2003). Program evaluation: Methods and case studies (6th ed.). New Jersey: Prentice Hall.

First assignment: a description of how to do program evaluation in Canada
http://www.spcottawa.on.ca/CBRNO_website/How2program_evaluation.htm#Client
http://www.mhcbe.ab.ca/cec/specialeducation-studentservices/ECSPROGRAMMINGFORCHILDRENWITHSEVEREDISABILITIES.pdf

PENDING ASSIGNMENTS
Assignment # 5 Design and test a short survey. Include a variety of question types such as scale rating, short answer, and open-ended. Submit the original version and a modified version based on the testing of the survey with four individuals. Deadline: November 20th
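
As a rough starting point for this assignment, here is a minimal sketch in Python of how the three question types might be drafted and the pre-test notes recorded; the question wording, field names and notes are placeholders I invented, not the actual survey content.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    kind: str          # "scale", "short_answer", or "open_ended"
    scale: tuple = ()  # labels used only for scale-rating items

survey = [
    Question("How satisfied are you with the after-school program?",
             "scale", ("1 Very dissatisfied", "2", "3", "4", "5 Very satisfied")),
    Question("How many sessions did your child attend this month?", "short_answer"),
    Question("What one change would most improve the program?", "open_ended"),
]

# Notes from testing with four individuals, which would drive the modified version.
pretest_notes = {
    "respondent_1": "Unsure whether 'program' meant tennis or Spanish; wording revised.",
    "respondent_2": "Skipped the open-ended item; prompt shortened.",
    "respondent_3": "Wanted a 'not applicable' option on the scale item.",
    "respondent_4": "No issues reported.",
}

for q in survey:
    print(f"[{q.kind}] {q.text}")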

Major Assignment: Project
Evaluation Plan (Proposal) Dec. 11 - 50 marks
A Proposed Evaluation of the “Spanish Intermediate & Conversational and Ultra Play - Tennis After School” Programs in a Community Center: A Case Study of the City of Ottawa: Parks & Recreation Master Plan Experience
By Nelson Dordelly-Rosales
The purpose of this paper will be to design an evaluation plan after completion of the “Ultra Play - After School” Program in the Sandy Hill Community Center: a case study of the City of Ottawa Parks & Recreation experience. The plan will be a theoretical paper that outlines the program and the goals or objectives to be evaluated. It will demonstrate my ability to analyze a program, integrate the different tools and theories addressed recently into an evaluation plan, determine a suitable evaluation design, and create the instruments I would use to conduct the analysis. Essentially, the purpose of this evaluation plan is to convince Mr. Martin Travis, Coordinator of Parks and Recreation, that I should be “the evaluator for the evaluation.” Hence, I want to convince him that I am best suited to perform the evaluation of the above-mentioned program, and that I have the best team to help me on this matter.
Through a case study, this paper will lend insight into the ways a program-based evaluation, or a logic “improvement-focused” model (Posavac & Carey, 2003), can facilitate a “holistic” approach to the evaluation of the ‘Ultra Play - After School’ program. An important piece of this evaluation plan, then, is to describe, or elaborate upon, the reasons for selecting this particular model and approach. While the term “program” is used, I find a logic model equally useful for describing group work, team work, community-based collaboration and other complex organizational processes as I seek to promote results-based performance. For the presentation of the paper, I will use a case study format that includes the following components:
• Abstract – a brief summary of the major points of the study as well as a short list of key words.
• Introduction
Prior evaluations or policies in a Community in Ottawa:
http://www.gnag.ca/index.php
http://www.gnag.ca/index.php?page=67&id=20

In comparison to:
http://www.topeka.org/parksrec/garfield.shtml

http://www.uml.edu/centers/CFWC/Community_Tips/Program_Evaluation/Program_Evaluation.html

http://www.skaneatelescommunitycenter.com/index.php?option=com_facileforms&Itemid=124


A case in Florida Miami Beach:
http://www.shakealegmiami.org/site/c.kkLUJbMQKpH/b.2521629/k.BF03/Home.htm

Program or outcome evaluation assesses the extent to which planned activities produce the desired outcome among a target population; the evaluation is considered and set up when the project is designed. The PE model is led by the Planning and Evaluation Committee, consisting of key representatives from the collaborating organizations. Planning, Development and Communications is the staff support unit charged with assisting the organization in its efforts to become a self-correcting organization through planning, monitoring and evaluation.
http://www.ccpfc.org/rd/eval_center.cfm

Importance: http://www.cdc.gov/eval/framework.htm

Examples:
www.phoenix.gov/ARTS/eval0405.pdf

http://www.afterschoolflorida.hhp.ufl.edu/evaluation_links.html

Models of Evaluation:
http://www.edtech.vt.edu/edtech/id/eval/eval_models.html

o Defining and addressing the need to move program evaluation methods that rely heavily on data gathering by postal mail toward online instruments.
o What do you propose to do?
o What is my plan?
o What are my objectives: INPUT
o Involving stakeholders
• Model & Method: OUTPUT
o Description - How to do it?
o Evaluation Matrix (a sketch of a possible matrix appears after this outline)
o Reasons for selecting particular foci and approaches
o What are the challenges and or roadblocks?
o What are the assessment instruments to be used?
o Measurement considerations & Data collection
• Evaluation: OUTCOME
o How well can I meet my objectives?
o Why my team and I are best suited to perform the evaluation
o Emerging trends
• IMPACT: Conclusion, Summary and Recommendations
o Summarize what I learned from this experience.
o What would I recommend to others who would like to replicate my efforts?
o What would they need to be prepared for?
o What needs to be improved?
o Strengths and weaknesses
o Other issues
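
Finally, as flagged beside the “Evaluation Matrix” item above, here is a minimal sketch in Python of what one such matrix might look like; the questions, indicators, sources and methods are hypothetical placeholders for the Ultra Play - After School program, not content from the actual plan.

# Each row links an evaluation question to its indicators, data sources, and methods.
evaluation_matrix = [
    {
        "question": "Are participants attending regularly? (OUTPUT)",
        "indicators": ["weekly attendance rate", "drop-out count"],
        "data_sources": ["sign-in sheets", "registration records"],
        "methods": ["document review"],
    },
    {
        "question": "Do parents perceive gains in children's activity levels? (OUTCOME)",
        "indicators": ["parent-rated change on a 5-point scale"],
        "data_sources": ["parent survey", "follow-up interviews"],
        "methods": ["online survey", "semi-structured interview"],
    },
]

for row in evaluation_matrix:
    print(row["question"], "->", ", ".join(row["methods"]))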