Friday, 16 March 2012

Assignment 5: Sleep and Daily Functioning Survey

Since I do not plan to have a survey as part of my program evaluation, I chose to construct a survey based on a topic that I find very interesting: sleep.
The original survey was constructed and sent out to four participants. Based on their responses, I made changes to the survey.
These links are to the original survey and the revised survey.
I chose to include page breaks in my survey, and to make some questions required, because I did not want participants’ answers to be swayed by questions appearing later in the survey.
Below I have listed questions that I changed and why I made those changes:
Introductory Statement: I included the statement that this survey was ‘for adults’ to avoid confusion when the participants reached the second question and there were only age brackets for people who are 18 years and older.
Question 3: I included in this question ‘during the day’ so that participants would be clearer about what specifically I was looking for.
Question 4: In this question I included ‘please note this does not include what you would consider a nap’. The reasoning for this was that I am interested in sleeping occasions that are not naps. I chose to keep the question as ‘sleeping occasions’ instead of ‘at night’ because many people do shift work and do not necessarily sleep at night.
Question 5: Same as question 4.
Question 6 (now question 6-8): I chose to change ‘daily functioning’ to ‘mood, mind, body’ because I am interested in how a good night’s sleep affects all of these areas. With the original question, participants only mentioned one or two of these factors, and by explicitly asking for them, results will be more comprehensive.
Question 7 (now question 9): Same as question 4.
Question 8 (now questions 10-12): I chose to split this question into three questions because the original question was loaded and was actually three questions disguised as one. The results from the original survey indicated that making it three separate questions would make participants more inclined to answer all parts. By doing this, the data will be more comprehensive and in line with the information I am seeking.
New Question (now question 14): I added this question based on the results from the last question in the original survey. This was not an aspect of sleep that I had originally considered, but I thought it would provide valuable information as a separate question.

Saturday, 11 February 2012

Assignment 4 Logic Model of Crisis Management Service

Crisis Management Service (CMS) deals with clientele who are ‘hard to serve, and difficult to engage’ (Saskatoon Crisis Intervention Service, 2012). The CMS staff coordinate services for their clients and manage their clients’ cases. Many of the clientele of CMS have mental illnesses, legal issues, addictions, fetal alcohol spectrum disorder, and acquired brain injury, among other challenges. CMS works from a strength-based approach.

The above logic model demonstrates the inner workings of CMS. CMS would not be able to operate without the inputs that it receives. CMS receives funding from many agencies. CMS is part of the Saskatoon Crisis Intervention Service along with Mobile Crisis, and it shares some of its funding inputs with Mobile Crisis. The shared funding comes from the United Way, Social Services, and the City of Saskatoon. Other funding comes from Mental Health and Addiction Services, Community Corrections, and private donations. Other inputs necessary for CMS to operate are its six staff members, the vehicles used to take clients to appointments and services, and relationships with other agencies.

These inputs feed into the activities that CMS engages in. CMS provides its clients with crisis intervention; assertive outreach to organizations; behaviour shaping and management; help with basic needs such as food, clothing, and shelter; service coordination; screening, assessment, and consultation; the building of a network of support; advocacy, information, education, and training; assistance to other frontline workers; and referrals to other agencies as necessary. These activities reach a range of participants: the clients and their families, other agencies, society, and family physicians.

These levels of participation then feed into the short-term goals of CMS, which include providing necessities for clients such as food, clothing, and shelter, and connecting them with medical professionals, legal aid, and addiction services. Medium-term goals are those which CMS hopes to accomplish through the short-term goals; they include teaching clients, through modelling, how to manage their money, helping them learn life skills, helping them learn about addictions, and helping them become better able to manage their own lives. Finally, the long-term goal of CMS is for clients to be able to live at their own optimal level of independence in the community. It is important, however, to acknowledge the various assumptions and external factors that can influence these services (see the logic model).
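As a purely illustrative aside (not part of the CMS logic model itself), the short Python sketch below writes the narrative above out as a plain data structure, which makes the inputs-to-activities-to-participation-to-outcomes chain easy to see at a glance. The grouping and field names are my own assumptions.

```python
# Illustrative only: the CMS logic model from the narrative above, written
# out as a plain dictionary so the "feeds into" chain is explicit.
cms_logic_model = {
    "inputs": [
        "shared funding (United Way, Social Services, City of Saskatoon)",
        "other funding (Mental Health and Addiction Services, Community Corrections, private donations)",
        "six staff members", "vehicles", "relationships with other agencies",
    ],
    "activities": [
        "crisis intervention", "assertive outreach", "behaviour shaping and management",
        "help with basic needs", "service coordination", "screening, assessment, and consultation",
        "building a network of support", "advocacy, education, and training",
        "assistance to other frontline workers", "referrals to other agencies",
    ],
    "participation": ["clients and their families", "other agencies", "society", "family physicians"],
    "outcomes": {
        "short_term": ["basic necessities", "medical professionals", "legal aid", "addiction services"],
        "medium_term": ["money management", "life skills", "addictions knowledge", "self-management"],
        "long_term": ["optimal level of independence in the community"],
    },
}

# Print a simple summary of each stage of the model.
for stage, items in cms_logic_model.items():
    count = len(items) if isinstance(items, list) else sum(len(v) for v in items.values())
    print(f"{stage}: {count} elements")
```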

Assignment 3 Program Evaluation Worksheet

Program Evaluation of Crisis Management Service (CMS)
Engage Stakeholders:
Who should be involved?
            For this program evaluation, the stakeholders that should be involved include CMS, its staff, and the clients that CMS serves.
How might they be engaged?
            The CMS staff will be engaged through focus groups and meetings to determine their goals and ideas for this program evaluation. The information obtained through these meetings will be used to guide the direction of the program evaluation. The CMS staff can also provide information about the organization as they see it and help to formulate a list of potential participants for this program evaluation. The clients of CMS will be engaged through interviews, which will be used to determine the strengths and weaknesses of CMS from the clients’ point of view. The final data will be summarized in a report.
Focus the Evaluation:
What are you going to evaluate? Describe the program (logic model).
            CMS would not be able to operate without the inputs that it receives. CMS receives funding from many agencies. CMS is part of the Saskatoon Crisis Intervention Service along with Mobile Crisis, and it shares some of its funding inputs with Mobile Crisis. The shared funding comes from the United Way, Social Services, and the City of Saskatoon. Other funding comes from Mental Health and Addiction Services, Community Corrections, and private donations. Other inputs necessary for CMS to operate are its six staff members, the vehicles used to take clients to appointments and services, and relationships with other agencies.
These inputs feed into the activities that CMS engages in. CMS provides its clients with crisis intervention; assertive outreach to organizations; behaviour shaping and management; help with basic needs such as food, clothing, and shelter; service coordination; screening, assessment, and consultation; the building of a network of support; advocacy, information, education, and training; assistance to other frontline workers; and referrals to other agencies as necessary.
These activities reach a range of participants; participation therefore includes the clients and their families, other agencies, society, and family physicians. These levels of participation then feed into the short-term goals of CMS.
The short-term goals of CMS include providing necessities for clients such as food, clothing, and shelter, and connecting them with medical professionals, legal aid, and addiction services. Medium-term goals are those which CMS hopes to accomplish through the short-term goals; they include teaching clients, through modelling, how to manage their money, helping them learn life skills, helping them learn about addictions, and helping them become better able to manage their own lives. Finally, the long-term goal of CMS is for clients to be able to live at their own optimal level of independence in the community.
What is the purpose of the evaluation?
            The purpose of the evaluation is to determine clients’ views on the program’s strengths and weaknesses so that CMS can improve its services to benefit its clientele.
Who will use the evaluation? How will they use it?
CMS staff - To determine where their clients think improvements need to be made in their services, and to make the improvements they feel are necessary
CMS clientele - To have a say in the quality of the services they access
Community-University Institute for Social Research (CUISR) - To adhere to their mandate of bridging university research and community needs. To add to their publications for public knowledge
The researcher (Terra Quaife) - To generate a report for CMS and CUISR, and to generate data for her thesis to add to the research literature
What questions will the evaluation seek to answer?
1. Are the goals of CMS consistent with the services that their clients receive?
2. Are the CMS clients that are being served the population that needs to be served?
3. Is the process of client engagement with CMS contributing to the desired goals of CMS?
4. What are the strengths of CMS?
5. What are the weaknesses of CMS?
6. Are clients generally satisfied with CMS?
What information do you need to answer the questions? (What I wish to know, and how will I know it?)
Are the services that clients receive in line with CMS’s goals? I will know this if the services that clients report receiving match the goals of CMS outlined in the logic model.
Are the clients using the services? I will know this if clients describe in the interviews the services they use that are facilitated through CMS.
The clients’ perception of CMS’s strengths - Clients will be asked directly about the perceived strengths of CMS.
The clients’ perception of CMS’s weaknesses - Clients will be asked directly about the perceived weaknesses of CMS.
Client general satisfaction - Clients will be asked to indicate anything about CMS that they would change and whether they are generally satisfied with the services they receive.
When is the evaluation needed?
            The deadline for the final program evaluation has been negotiated to be May 31, 2012.
What evaluation design will you use?
            The evaluation will be a formative evaluation since the program is ongoing. It will use the ‘process’ component of Stufflebeam’s CIPP model as a framework for this evaluation.
Collect the Information
What sources of information will you use?
Existing information: Research on crisis management programs, research on program evaluation.  Furthermore, CMS had a program evaluation completed in 1988. This program evaluation, although it had a different focus (financial), will be used to help inform the current evaluation.
People: CMS staff, CMS clients.
What data collection method will you use?
            The data collection methods that will be used are document review, interviews, and focus group meetings. Document reviews will be conducted to build a foundation of what crisis management is and to explore the findings of the program evaluation completed in 1988. Interviews will be conducted with the CMS clientele to understand what they want out of the services they receive, and the focus groups and meetings will be conducted with CMS staff in order to understand what CMS wants out of the program evaluation. This will allow the staff to participate in the initial stages of the program evaluation.
Instrumentation: What is needed to record the information?
            Tape recorder and computer. Notes will be taken during the focus group and staff meetings. A tape recorder will be used during participant interviews to ensure accuracy of transcription; notes will also be taken as a backup.
When will you collect data for each method you’ve used?
Document review – this data will be collected during the program evaluation (mostly prior to it) and during the program itself.
Focus groups/meetings – this data will be collected throughout the program evaluation and the program itself.
            Interviews – this data will be collected throughout the program evaluation and the program itself.
Will a sample be used?
             A sample will be used. For the focus group and meetings, all six CMS staff will be present. The procedure used for sampling the CMS clientele will be purposeful sampling. The CMS staff will compile a list of active and assessment clients (those who are currently using their services), and 10-15 participants will be chosen, depending on theoretical saturation. To obtain a purposeful sample, clients’ sex and age will be used.
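To illustrate roughly how such a selection could be organized, here is a hypothetical Python sketch that stratifies a client list by sex and age band and draws a few clients from each stratum. The field names, age bands, and target of twelve participants are assumptions for illustration only; the actual selection will be made purposefully in consultation with the CMS staff.

```python
# Hypothetical sketch only: drawing a purposeful sample stratified by sex
# and age band from a staff-compiled client list. The field names, bands,
# and target size are illustrative assumptions, not the actual procedure.
import random
from collections import defaultdict

clients = [
    {"id": 1, "sex": "F", "age": 34},
    {"id": 2, "sex": "M", "age": 52},
    {"id": 3, "sex": "F", "age": 27},
    # ...the full list of active and assessment clients would go here
]

def age_band(age):
    """Collapse age into coarse bands so each stratum stays reasonably sized."""
    return "18-30" if age <= 30 else "31-50" if age <= 50 else "51+"

# Group clients into strata keyed by (sex, age band).
strata = defaultdict(list)
for client in clients:
    strata[(client["sex"], age_band(client["age"]))].append(client)

target = 12  # between the planned 10 and 15, pending theoretical saturation
per_stratum = max(1, target // len(strata))

# Invite up to `per_stratum` clients from each stratum.
sample = [c for group in strata.values()
          for c in random.sample(group, min(per_stratum, len(group)))]
print(f"{len(sample)} clients invited across {len(strata)} strata")
```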
Pilot Testing: when, where, how?
            No pilot testing will be done for the questions that will be asked in the interviews with clients. Due to time and money constraints, and due to the vulnerable nature of the clientele, it is not feasible to conduct a pilot study first.
Analyze and Interpret
How will the data be analyzed?
The researcher (Terra Quaife) will be responsible for data analysis. The data analysis method that will be used is the general inductive approach, a method for finding significant themes in raw data in order to answer the research questions without the constraints of any one particular methodology.
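As a small, assumed illustration of what the later stages of an inductive analysis can look like, the sketch below tallies hand-assigned theme codes across transcripts. The themes and participant labels are invented; the real coding will be done by hand from the interview data.

```python
# Assumed illustration only: once interview transcripts have been read and
# hand-coded, a simple tally shows how widely each emergent theme is shared.
from collections import Counter

# Hypothetical codes assigned to excerpts from three transcripts.
coded_excerpts = {
    "participant_1": ["access to housing", "staff relationships", "access to housing"],
    "participant_2": ["staff relationships", "waiting times"],
    "participant_3": ["waiting times", "access to housing"],
}

# Count coded excerpts per theme, and how many participants raise each theme.
excerpt_counts = Counter(code for codes in coded_excerpts.values() for code in codes)
participant_counts = Counter(code for codes in coded_excerpts.values() for code in set(codes))

for theme, n_participants in participant_counts.most_common():
    print(f"{theme}: raised by {n_participants} participant(s) in {excerpt_counts[theme]} excerpt(s)")
```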
How will the information be interpreted – by whom?
            The information will be interpreted independently by two researchers to ensure trustworthiness of the data.
What did you learn? What are the limitations?
            It is hard to say what the limitations are at this point; however, one could speculate that a limitation may be that only a couple of stakeholder groups were engaged in this program evaluation. The program evaluation would be more comprehensive if multiple stakeholder groups were engaged. Furthermore, it would be a more comprehensive program evaluation if all elements of the CIPP model were used instead of only the ‘process’ component. Lastly, due to the voluntary nature of the program evaluation, some data may not be obtained if participants do not wish to take part.
Use the Information
How will the evaluations be communicated and shared?
            The evaluation will be made into a publication through CUISR, and therefore, will be made available to the public. A copy of the publication will be given to CMS and they have agreed to give access to the publication to all of their clients. Furthermore, the researcher (Terra Quaife) will be analyzing the data for her thesis. Once the thesis is complete, it will be available online through the University of Saskatchewan library.
Next steps?
            CMS and CUISR would also like the researcher to complete a phase 2 program evaluation including stakeholders that are involved with CMS. These stakeholders will include Mental Health and Addictions Services, a psychiatrist, Client Patient Access Services, and the Provincial Court.
Manage the evaluation
Human subjects protection
            Steps have been taken to ensure that participants’ identities will remain confidential. The staff at CMS will not know which clients were chosen to participate. Furthermore, ethics approval has been received from the University of Saskatchewan Behavioural Research Ethics Board, thus helping to ensure the protection of this vulnerable population.
Management chart and timeline – see picture 1

Responsibilities
            The researcher (Terra Quaife) has the responsibility to perform the literature review, set up meetings with CMS staff, complete the ethics application, conduct the client interviews, transcribe and analyze the data, and write up the final report for dissemination. The CMS staff hold the responsibility of funding the program evaluation, attending meetings, disseminating the findings to the CMS clients, and making changes based on the results of the program evaluation as they see fit.
Budget – see picture 2

Standards
Utility
            The utility of this program evaluation will be ensured due to how the request for this program evaluation came about. The CMS staff asked for an evaluation of their program, and furthermore, they will be involved in many stages of the evaluation. It therefore stands to reason that the results of the program evaluation will be utilized by the CMS staff in the betterment of their program.
Feasibility
            This program evaluation is realistic, prudent, and frugal. As this outline suggests, the program evaluation is realistic: it was tailored to meet its time and financial constraints. There are no elements listed above that are unnecessary for carrying this program evaluation out; it is therefore prudent and frugal.
Propriety
            Care will be taken by the CMS staff to ensure that the list of potential participants the researcher receives includes only clients who are capable of giving informed consent. Furthermore, there is no obligation to participate in this study, and ethics approval was obtained from the University of Saskatchewan Behavioural Research Ethics Board.
Accuracy
            The use of an external researcher will help to ensure the accuracy of the reported findings. Furthermore, the methods used to collect and analyze the data have been shown to be effective and valid by previous research on these methods.

Friday, 20 January 2012

Assignment 2

The program that is going to be reviewed is entitled Description and evaluation of a prenatal exercise program for urban Aboriginal women. This program seeks to reduce the incidence of gestational diabetes mellitus (GDM) and Type 2 diabetes in Aboriginal women and their offspring through exercise. I have chosen to use a mixture of models for evaluating this program. The three models that I think would be appropriate to evaluate this program are Scriven’s model, Stake’s countenance model, and Stufflebeam’s CIPP model.
In terms of Scriven’s model, since this program has already been implemented and has come to completion, this program evaluation will be a summative one. In this sense, the program evaluation will look at the program’s success in meeting its goal of reducing the rate of GDM and Type 2 diabetes in Aboriginal women and their offspring through exercise. Since this program evaluation is summative, there will be no opportunity to modify the program as it is implemented. The summative nature of this program evaluation, however, will allow for the continuance, modification, or cessation of the program in the future.
Stake’s countenance model will also play a role in this program evaluation in two main ways. First, in order to obtain a clear picture of the impact of this program on Aboriginal women, their lives, and future generations, both anecdotal and descriptive data will be utilized. In doing so, this program evaluation will use both qualitative and quantitative data to ensure that an accurate portrayal of the program is achieved. Anecdotal questions that will be asked include:
  • Have you noticed a difference in your health since taking part in this program?
  • Were you generally satisfied or dissatisfied with the program?
  • Is there anything that you would like to see changed if the program were to run again?
Descriptive data would include:
  • Medical history of GDM and Type 2 diabetes
  • Age of the participant
  • The usual diet of the participant
  • Correlation between the number of sessions attended and the propensity of GDM or Type 2 diabetes in offspring (a brief calculation sketch follows this list)
  • Rate of GDM and/or Type 2 diabetes in the offspring of the women involved in the program
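For the correlation item above, here is a minimal sketch of how that calculation could be carried out. The attendance counts and diagnosis indicators are entirely invented for illustration; the real figures would come from program records and medical follow-up of the participants and their offspring.

```python
# Invented numbers for illustration only: relating sessions attended to a
# binary GDM/Type 2 diabetes outcome in offspring. With a binary outcome,
# the point-biserial correlation equals Pearson's r between the two series.
import numpy as np

sessions_attended = np.array([2, 5, 8, 10, 12, 3, 7])   # per participant
offspring_diagnosis = np.array([1, 1, 0, 0, 0, 1, 0])   # 1 = GDM or Type 2 diabetes, 0 = neither

r = np.corrcoef(sessions_attended, offspring_diagnosis)[0, 1]
print(f"point-biserial correlation: r = {r:.2f}")
```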
Another component of the countenance model that is applicable to this program evaluation is the issue of contingency: what is the relationship between exercise and the reduction of GDM and Type 2 diabetes in the offspring of Aboriginal women? This question may prove hard to answer given the abundance of possible interacting and confounding variables in this study. Given prior research, however, if there is a reduction in the incidence of GDM and Type 2 diabetes in the offspring of the women who participated in this program, one could make a logical connection between prenatal exercise and a lower incidence of GDM and Type 2 diabetes in offspring; however, one should not accept these results blindly. Questions that need to be asked to determine contingency include:
  • Could the nutritious snacks after the exercise have had an influence on GDM or Type 2 diabetes?
  • Could the free educational materials that were given out have caused the participants to start leading healthier lives which may contribute to the reduction of GDM and Type 2 diabetes?
  • Were the women using the pool or doing other exercise at times other than in the program?
  • Are the women on any type of medication that may interact with GDM or Type 2 diabetes?
  • Could the act of forming relationships with the other women in the program promote a healthier lifestyle and a support network causing the participants to be less stressed and less prone to GDM or Type 2 diabetes?
  • Or, is there a combination effect where exercise along with these factors contributes to decreased GDM or Type 2 diabetes?
Lastly, Stufflebeam’s CIPP (context, input, process, and product) model, more specifically a process and product evaluation, will be used to evaluate this program. In terms of a process evaluation, some of the questions that need to be asked have been described previously in this post. These questions will help to clarify and interpret the outcomes. To recap, however, these questions would include (as mentioned above):
  • Could the nutritious snacks after the exercise have had an influence on GDM or Type 2 diabetes?
  • Could the free educational materials that were given out have caused the participants to start leading healthier lives which may contribute to the reduction of GDM and Type 2 diabetes?
  • Were the women using the pool or doing other exercise at times other than in the program?
  • Are the women on any type of medication that may interact with GDM or Type 2 diabetes?
  • Could the act of forming relationships with the other women in the program promote a healthier lifestyle and a support network causing the participants to be less stressed and less prone to GDM or Type 2 diabetes?
  • Or, is there a combination effect where exercise along with these factors contributes to decreased GDM or Type 2 diabetes?
Since this program has already been completed, and the goal of the present evaluation is to determine whether the exercise program was effective in reducing GDM and Type 2 diabetes in future generations, it makes sense to include a product evaluation in this program evaluation. Questions that would assess short-term and long-term outcomes include:
  • Do the participants feel healthier after being part of this program?
  • Do participants feel supported after being part of this program?
  • Has there been a change in the diet/exercise habits of the participants since the program?
  • Do the participants have more knowledge about pregnancy, exercise, GDM, and/or Type 2 diabetes since this program?
  • Is there a reduction in GDM progressing to Type 2 diabetes in the women since taking part in this program?
  • Is there a reduction in the propensity of GDM and/or Type 2 diabetes in the offspring of the women who were involved in this program?
I would have liked to include Lincoln and Guba’s naturalistic model as well, but because the program was completed prior to the evaluation, this is not possible. I think that if this model had been part of the program evaluation, the evaluator could have received valuable first-hand information about the interactions between the women in the program, as well as interactions between program facilitators and participants. By including this component, more clarity may have been possible in terms of interacting or confounding variables affecting the outcomes of this program.
In conclusion, it is my opinion that the integration of Scriven’s model, Stake’s countenance model, and Stufflebeam’s CIPP model ensures a comprehensive program evaluation. By utilizing elements of various models, and by combining various methods of data collection, a clearer picture can emerge of the relationship between exercise and GDM or Type 2 diabetes in the offspring of Aboriginal women. Furthermore, the utilization of these models allows for more clarity about the variables that could interact with or influence exercise and GDM or Type 2 diabetes. Lastly, the models for this program evaluation were chosen specifically with the goal of the program in mind: to reduce the incidence of GDM and/or Type 2 diabetes in the offspring of Aboriginal women.
Reference
Klomp, H., Dyck, R., & Sheppard, S. (2003). Description and evaluation of a prenatal exercise program for urban Aboriginal women. Canadian Journal of Diabetes, 27, 231-238.

Saturday, 14 January 2012

Assignment 1

            The program evaluation (PE) that is being reviewed is entitled Process and outcome evaluation of an emergency department intervention for persons with mental health concerns using a population health approach. In this article, the authors sought to evaluate the effect of introducing a mental health triage and a mental health counsellor on an emergency department’s (ED) care of people with mental health concerns. This review will outline the models that were used in formulating the evaluation, explore various strengths and weaknesses, and offer some personal thoughts on this PE.
            Rarely is only one model used when formulating a PE; four models were used in the current PE. Stake’s countenance model states that PEs should include both anecdotal data and descriptive data. In the present PE, qualitative anecdotal evidence was obtained from various stakeholders, and descriptive data were also obtained through a quantitative component. Another component of the countenance model that is relevant to the current PE is the question of whether the intervention is being implemented in relation to its objectives. This directly relates to the question posed by the countenance model: is there congruence between what is intended and what is observed? A third component of the countenance model that is applicable to the current PE is the question of whether there are logical connections between an event and its purpose. This is seen in the current PE through the observed connection between the mental health triage and counsellor and the PE’s goals, or purpose.
            An element of Stufflebeam’s CIPP model was also used in the current PE. A premise of the CIPP model is that PEs can look at any of the four components (C-context, I-input, P-process, and/or P-product) of a program. The current PE focused on the process and the outcome, or product, of the program. In this sense, the CIPP model was utilized.
            Scriven’s model is also apparent in the current PE. Scriven stated that PEs can be goal or role focused. The current evaluation is goal focused in that part of its focus is on the outcomes of the program. Furthermore, there appeared to be a focus on the goals of the program when the measures and research questions used in this PE were identified.
            Lastly, Rippey’s transactional model appears to have influenced the current PE. Part of the transactional model is to include all those who are affected by an issue. The current PE included many of the individuals who are affected by the program, from psychiatric consultants to the families of individuals with mental health concerns. Another component of the current PE that falls in the realm of the transactional model is that the program was implemented because of the potentially damaging situation of misuse/non-use of resources for people with mental health concerns in the ED.
Even though this PE used multiple models to inform its implementation, there are still weaknesses of both the PE and the models that were used. In terms of weaknesses of the PE, a number of the pre- and post-intervention participant characteristics were significantly different from each other, and the results of this PE therefore need to be seen through this lens. Secondly, although long-term data were reported, the authors caution against relying on these data since not enough time has elapsed to truly measure long-term outcomes. In terms of weaknesses of the models that were used for the PE, although it was mentioned that the program was implemented partially because treating people with mental health concerns the same as those without was creating a financial strain, there was no analysis of the change in cost with the implementation of the new program. This might have been analyzed had Provus’s discrepancy model been used. Furthermore, the goals of the program were known ahead of time, and therefore there is the possibility that the authors may have been affected by ‘tunnel vision’.
Strengths were also present, both in terms of the PE and the models that were used. One of the strengths of the PE was the use of mixed methods: the inclusion of both qualitative and quantitative data allows for triangulation and therefore greater validity. Another strength was the utilization of components of many models, allowing for a more inclusive PE. One of the strengths of the models that were used was the inclusion of many stakeholders in the evaluation; by including many stakeholders, a truer picture of the program is obtained. Lastly, the connection that was made between the intended use of the program and what was observed in the PE can be seen as a strength.

Personally, although I think that the program that was evaluated was a valuable contribution to the mental health care field in Ontario, Canada, I thought that this PE could have been written more clearly. I found that I had to re-read parts of the evaluation to truly understand what the authors were trying to say. Furthermore, I think that the PE would have been more valuable had it been conducted during the program implementation rather than after; this would have allowed the people who were involved in the program to respond in real time to the needs of the clients. Lastly, although the authors reported the results of the PE, and the results appeared to be positive in terms of the usefulness of the new program, the authors did not mention whether this PE was used to change the current program, or whether it had any effect on the program at all. It is my view that PEs should be conducted with the intent of having an actual impact on the program, and it would have been beneficial if the authors had explored this. On a positive note, I found the authors’ use of five different datasets valuable because it allowed for richer data and results. Furthermore, I found the background section of the paper very helpful in setting up why this program was needed, and I enjoyed that the PE was Canadian research. This PE is applicable to me because of my thesis topic and the work that I am doing with the Community-University Institute for Social Research. It was great to be able to read an article and have it be useful in a few areas of my life.
Many PEs make use of different models in their implementation. The current PE utilized Stake’s countenance model, Stufflebeam’s CIPP model, Scriven’s model, and Rippey’s transactional model. Although this program evaluation had some weaknesses, not only in terms of the PE itself but also in terms of the models that were used, it also had many strengths, including the use of mixed methods and mixed models.
Reference
Vingilis, E., Hartford, K., Diaz, K., Mitchell, B., Velamoor, R., Wedlake, M., & White, D. (2007). Process and outcome evaluation of an emergency department intervention for persons with mental health concerns using a population health approach. Administration and Policy in Mental Health and Mental Health Services Research, 34(2), 160-171.