
Process Evaluation Discussion Responses

Select one of the scenarios describing a program in Chapter 8 of Dudley (2014; pp. 167-210) and review the challenge that was addressed with a process evaluation. Challenges are identified in italics at the top of each description. Skim through the list of already posted discussions, and ensure that you are the first or second student to use the scenario for your post. To ensure a wide range of examples, no more than two students may use the same scenario.

Enter your choice of scenario in the discussion forum as soon as you select it. You may return to complete the discussion later.

  1. Determine the purpose for the evaluation. If one is not provided, infer it from the description.
  2. Review the Process Evaluation Planner in Getting to Outcomes (GTO) (RAND Corporation, 2019) to see how an actual process evaluation is conducted (https://www.rand.org/pubs/tools/TL259/step-07.html). Scroll halfway down the page; it is located under Tools Used in This Step. The goal for this discussion is to apply that approach.
  3. Find four questions in Table 2 that you can answer about your selected program, either from information provided or from information you can reasonably infer about it. Delete the rest of the questions so your table does not get too large.
  4. Post your responses to the questions in Table 2 (below) if provided in the description in Dudley (2014) or if you can reasonably infer them from the context of the scenario.
  5. Write a brief summary of the findings from your evaluation of the program.

Format for your discussion post (due by the end of the day on Wednesday). Please label your response as “Original Post”.

  1. Complete Table 2 below, which is a process evaluation table. Include responses to your four selected questions as follows.
    • Name the scenario you intend to use for the discussion from Chapter 8 of Dudley (2014) at the top of the table. Include the page number where the scenario is found. (1 sentence)
    • Identify the purpose of your selected process evaluation. Most purposes are described in the scenarios in the textbook, but if you have to infer it from the description, choose one of the purposes identified in the lists at the top of this discussion post. Place the purpose in the second line of the table. (1 sentence)
    • Select four questions (not all of them!) listed in the first column of Table 2 that you can reasonably answer about your program. If the scenario describes the question but not the responses the evaluators obtained, provide a plausible response from your interpretation of the scenario and indicate that it is your suggested answer.
    • Identify the types of individuals who provided the information or data for the evaluation in the scenario. If no respondents are described, suggest several who could reasonably provide the best answer to your selected question (one informant per question). List the informants for the process evaluation by their roles (staff, leaders, participants, etc.) and insert them in the table below. If the evaluation has not been conducted, list at least two types of informants who might provide useful information for the evaluation and their roles, and indicate that these roles are suggestions for the evaluation.
    • In the third column, provide an answer to the questions you selected from the description of the scenario in Dudley (2014). If no answers to the questions are given in the scenario you selected, provide a plausible answer that can be inferred from the scenario.

Table 2: Process Evaluation

Selected scenario from Dudley (2014) and page number:
Purpose of the evaluation (if none is provided, select one from the lists at the top of the discussion):
Column 1: Process Evaluation Questions (select 4; delete the rest)
Column 2: Name the role of the informant who provided an answer to the selected question, OR suggest an informant who could provide valid answers for the evaluation. (brief word or phrase)
Column 3: Enter the data collection method used to obtain responses to your selected questions and the responses that were obtained. If not provided in Dudley (2014), list the data collection methods you could use and the types of responses you might expect, based on the scenario description, and indicate that you are estimating the response. (1-3 sentences)
What interventions were planned and what were the desired outcomes of the interventions?
What were the characteristics of program participants? Were these people in the group that the program was designed to serve?
How much did the program participants use the program compared to the amount that the planners intended? Were the locations and times adequate?
Did the program describe efforts made toward cultural responsiveness and inclusiveness of staff, program activities, and/or recruitment of participants?
How closely did the program follow the Logic Model? Were all program components delivered as planned? If not, why not?
What problems or barriers arose to prevent delivery of program components as planned, if any? Was it possible to work around the obstacles?
How effective were the program activities in addressing the need for which the program was designed? Which were most/least effective? Were there any unintended outcomes (positive or negative)?
What was the quality of the program components that were delivered? How well did staff meet expectations for performance?
What was the staff’s (including volunteers) perception of the quality of the program services, adequacy of resources, and their treatment by program administrators, participants, and stakeholders?
How satisfied were the participants with the program as delivered?
Should the program be continued or repeated? If so, what changes are recommended for the tasks and methods used in the program, if any?

Adapted from RAND Corporation (2019). Getting to Outcomes: Step 07. Process evaluation. Retrieved from https://www.rand.org/pubs/tools/TL259/step-07.html

  2. Summarize the information obtained in the evaluation. Include the following information (4-10 sentences MAXIMUM):

2a. What strengths and weaknesses of the program were identified? (1-3 sentences).

2b. From the description given in Dudley (2014), does it appear that your selected program was implemented as planned? If not, provide the explanation, if any, for deviating from the program plan. If the scenario lacks a description of whether the program was implemented as planned, which would be a red flag for most auditors, note this lack of information and explain why a description of program fidelity is important in a process evaluation. (2-4 sentences)

2c. Include any information regarding client perceptions of the program. (1-3 sentences).

  3. Explain how the method of obtaining the answers to the questions (review of records, interviews and questionnaires with open and/or closed questions, focus groups, phone calls, etc.) might affect the results obtained. (See the data collection methods handout used in previous weeks for ideas.) What methods could you use to ensure that the findings and responses of informants were valid? (Maximum: 2 sentences)

 


 

Please choose one scenario. I have to post the scenario in the class before someone else chooses it, because it is two students per scenario. Thanks

 

Scenarios in the chapter relating to the issue of implementing a process as intended include “Example of an Overlooked Target Group” on pages 173-174 and “Implementing an Evidence-Based Manual in a Domestic Violence Program” (p. 178).

Staff orientation and training, ongoing consultation and coaching, and staff adherence to intervention protocol are addressed in the scenario “Adherence and Competence in Parent-Child Interaction Therapy” on page 185.

The issue of outreach to diverse populations is addressed in the scenario entitled “Adults with Developmental Disabilities” on page 188.

Issues related to access are presented on pages 188-192, which include two scenarios: “Access Issues for African Americans’ Use of Hospice” (pp. 190-191) and “Barriers to Using a School Program” (p. 191).

12 hours ago

Attached is the table I tried to copy and paste.

 

 

Could you let me know which scenario you would like to use before you start the assignment? Just to make sure it’s not already taken?

11 hours ago

Improving How Programs and Practice Work

Robert Herman-Smith and James R. Dudley

How is the intervention supposed to be implemented? How is it actually implemented? This chapter addresses these implementation questions and others. It does this by focusing on some of the issues of implementation studies during step 4 (plan the evaluation) of the Seven Evaluation Steps. Documenting and monitoring how an intervention is implemented are vital areas of evaluation and essential for program integrity. During the implementation stage, many questions are asked and answered that revolve around the theme of how well the program or practice approach is working. (See figure 8.1, Evaluation of the Intervention, which depicts the sequence Needs, Interventions, Outcomes.)

Implementation refers to the intentional application of interventions and strategies to solve problems in real-world settings (Mitchell, 2011). Implementation evaluation is primarily concerned with and investigates the procedures and processes established to carry out the program. In other words, how well have agencies followed procedures to support a new intervention and how have these changes impacted clients?

The implementation stage of an intervention is an opportune time to conduct a variety of evaluations. These evaluations can raise numerous important, if not essential, questions about the integrity of a program or practice approach. For example, are all the required components of a program implemented as planned? What program components seem to work and which ones do not? Has a team of qualified and competent staff members and volunteers been hired to provide the designated services? Are the key parties (e.g., administrators, staff, volunteers, and clients) communicating adequately? Many of the types of implementation evaluations covered in the chapter are identified in table 8.1, along with some general evaluation questions that they raise. Although this is not intended to be an exhaustive list of evaluations of implementation, it offers numerous examples of what is important.

TABLE 8.1 Types of Implementation Evaluations

Type of implementation evaluation / Overall question asked
1) Linking the intervention to the clients’ problems: Does the intervention address the causes of the clients’ problems that are the addressed concern?
2) Implementing the intervention as proposed: Is the actual intervention being implemented as it was intended or proposed?
3) Adopting and promoting evidence-based interventions: Is there evidence from manuals, evaluations, and/or clinical practice that an intervention works?
4) Focus on staff members: How are staff members selected, trained, involved, etc.?
5) Accessibility of the intervention: How accessible is the intervention to all of the intended target groups?
6) Program quality: Is the level of quality of the intervention high enough to satisfy the stakeholders?
7) Client satisfaction: How satisfied are the clients with the intervention?

LINKING THE INTERVENTION TO THE CLIENTS’ PROBLEMS

During the input or planning stage, major attention is focused on the problems and needs of prospective clients. As stated in chapter 6, a need is an aspect of a larger problem identified by a client that is perceived to be

amenable to change. Meeting a set of needs is the intended focus of a proposed program. Another issue is also important to explore: what are the underlying causes that prevent the need from being met? This is a critical question because the proposed program is expected to address the underlying causes.

An example of the logical link between the causes of a problem and the approach used by a program to address it is briefly illustrated in table 8.2 for the problem of child abuse. Several known causes of child abuse have been identified, including intergenerational transmission (abuse being taught from generation to generation), inadequate parenting skills, stresses associated with poverty, and isolation from important social supports. Each cause suggests a different program response. The example is somewhat simplistic because it infers that a complex problem such as child abuse has a single cause. Yet the example makes an important point. An intervention should be logically linked to the underlying causes of a problem. Each of the causes of child abuse suggests a response that will address it. As the example suggests, child abuse perpetrated by parents who were abused as children will not be reversed if it does not include some type of increased awareness and recognition of these intergenerational behaviors as part of the intervention. Similarly, some form of parenting skills training is absolutely essential if the problem is inadequate parental skills since abuse signals that parents need alternative disciplinary techniques. The link between child abuse and poverty can be addressed by preparing parents for a higher-paying job. Likewise, if social isolation is one underlying cause of abuse, teaching social skills and linking to healthy social contacts are logical responses.

TABLE 8.2 Link between Causes of a Problem and a Logical Intervention

Identified cause of child abuse / Logical program intervention
A. Intergenerational cause (abusing parent was abused as a child): Facilitation of insight into the intergenerational link through therapy
B. Lack of parenting knowledge and skill: Training in parenting skills
C. Economic stress from a low-income job: Increase in income through new job training and/or job change
D. Stress from social isolation: Peer support group of parents

As discussed in the last three chapters, the logic model provides an important organizing framework for understanding evaluations. Introducing the logic model at the implementation stage provides a framework for considering many ways to improve an intervention, to correct its course if needed, and to maintain its quality. The logic model helps focus on the sequence of steps that link the implementation of the program back to the clients’ unmet needs and forward to the clients’ anticipated outcomes or accomplishments. In this regard, interventions should address the problems and needs of their recipients and the underlying causes. Further, the implementation of an intervention should result in the clients achieving their anticipated outcomes.

For many years, agencies have been adopting the reasoning behind the logic model in requirements for most grant proposals. Grant writers are expected to document such things as the links between clients’ problems and the program approach they propose to implement. In brief, a convincing explanation needs to be mounted to the funding agency for how a proposed program can help clients resolve the problems of concern. For this reason, implementation evaluation is most concerned with the “program and practice activities” section of the logic model. Some important implementation questions are raised as a result of the logic model. Does the program’s approach seem to be directly linked to clients’ problems? Furthermore, is there evidence that the approach can provide solutions to these problems?
According to Pawson and Tilley (1997), an evaluation of the links between the causes of a problem and the program approach answers three key questions:

1. What are the mechanisms for change triggered by a program?
2. How do these mechanisms counteract the existing social processes?
3. What is the evidence that these mechanisms actually are effective?

Alcoholics Anonymous (AA) offers an example of an approach to a social problem: substance abuse. Implementation evaluations are interested in how programs actually work; for example, what are the mechanisms of an AA support group that helps people overcome the addictive tendencies of alcohol? Is it, as the philosophy of AA suggests, the spiritual ideology and message of the twelve steps? Is it the support that comes from others going through the same struggles? Is it a combination of spiritual ideology and social support from others struggling with substance abuse? Or is it something else? Evidence of what makes AA work for so many people could partially be found in the answers to these questions from an evaluation of a representative sample of some of the thousands of AA programs that meet regularly across the country.

Evaluation studies also have to answer the question about the social and cultural conditions necessary for change to occur among program recipients. In other words, how are the sociocultural factors recognized and addressed within a program? A new mentoring program for young African American men who did not have an adequate father figure for bonding provides an example. Several sociocultural questions could be asked of an agency sponsoring such a program. For example, to what extent and how does this program recognize the sociocultural factors? Are older African American men available to serve as mentors? Are the mentors capable of providing some of the missing pieces in well-being that these teenagers need?
Do the mentors have any training or other preparation in male bonding based on an evidence-based curriculum?

One evaluation identified the essential elements of a program for preventing crimes in a housing complex for low-income residents. The evaluation team identified ten key elements of a crime prevention housing program that would be needed based on evidence of prior programs with a similar purpose that were effective.

Example of an Evaluation of the Essential Elements of a Housing Program

Foster and Hope (1993) wanted to identify the essential elements for preventing crime within a housing complex. Their evaluation focused on identifying a list of key elements found to be essential in the effectiveness of prior programs of a similar nature. They concluded that ten elements were essential:

1. A local housing office for the program
2. A local repair team
3. Locally controlled procedures for signing on and terminating tenants in housing units
4. Local control of rent collection and arrears
5. Tenants assume responsibility for caretaking and cleaning of the open space around units with the assistance of a locally supervised staff team
6. An active tenant advisory group with a liaison to the management of the program
7. Resources available for any possible small-scale capital improvements
8. Well-trained staff that delegate authority
9. A project manager as the key figure to be accountable for management of the program
10. A locally controlled budget for management and maintenance

In the housing example, the evaluators accumulated substantial evidence that the successful housing complexes in preventing crime in their city had ten essential elements. Housing complexes that were not totally effective were without all, some, or even one of the elements. As the elements suggest, some common themes included a housing management team with some local control, realistic expectations of the tenants, availability of important resources, an active tenant advisory council, and a collaborative relationship between the council and the management team.

Social Work Evaluation: Enhancing What We Do

IMPLEMENTING THE INTERVENTION AS PROPOSED?

Other types of questions address whether the intervention is actually implemented as proposed or intended. How an intervention is supposed to function may relate back to an initial grant proposal or other early planning documents. Implementation as intended could also be based on more current reports describing the policies and practices of programs that have been running for some time.

Provision of a detailed description of an intervention as it is supposed to be implemented is a first step in this kind of evaluation. A clear description of the intervention is needed prior to monitoring how it is implemented. Therefore, it is often a good idea to begin with an accurate, written description of the intervention, whether articulated in an initial grant proposal or somewhere else. It is wise to describe an intervention in enough detail so that it can be replicated. An example of a program description is in a report about a visitation program for non-custodial parents (Fischer, 2002). The purposes of the program are to assist parents in establishing an access agreement with the custodial parent and in pursuing their legal rights and responsibilities as parents. The article documents the process of establishing and maintaining visitation agreements and identifies the principal barriers to establishing visitation. It includes a description of the policy and legal context for the program, a review of the pertinent literature, a description of a pilot program, a pilot process assessment, and a pilot outcome assessment. Data are also included on the factors associated with successful visitation. The program description came from several sources, such as case files, administrative records, and results of the pilot assessments.
Some further questions in attempting to find out whether a program is implemented as intended include the following:

• Are the clients being served the ones proposed or intended to be served?
• Are current staff members adequately qualified and trained to provide the proposed services at the required level of specialty and quality?
• Are the program’s goals and objectives evident or identifiable in the way in which the program is implemented?
• What happens on a typical day in a program (e.g., a daily routine study)?
• How do staff from different disciplines collaborate or work together?
• How are the roles of BSW and MSW staff members differentiated and complementary?

Weinbach (2005) points out that new programs may need to ask different questions than older programs when it comes to how the intervention is being implemented. Newer programs may need to ask:

• Is the program at its anticipated stage of development?
• How many clients have been served to date?

• Is the program fully staffed with qualified people?
• How well known is the program in the community?
• How well have sources of client referrals been developed?
• In what ways is the program supported and in what ways is it being questioned within the agency and community?

According to Weinbach (2005), programs that have been implemented for a few years or more and are considered more mature may ask another set of questions:

• Do the services and programs appear to have the potential to achieve their objectives?
• Are the services and other activities of the program consistent with the program model?
• Is the program serving the clients for whom it was intended? If not, why?
• Is the program visible and respected in the professional and consumer community?
• How much attrition has there been among clients and staff?
• Do staff members perceive that administrative support is adequate?
• How satisfied are clients with the program?

Often interventions are not implemented as they were intended or proposed, or they may have gone adrift of their intended course. This can occur for several reasons. Perhaps a program approach or model was not adequately articulated and discussed. Perhaps the program goals and objectives were not fully developed, were crafted as unrealistic, or were displaced for some changing circumstances. Also, a program could decide to change course because of the changing needs or understanding about the client population. Finally, the people in charge of implementing an intervention could be different from those who proposed and planned it. In this case, if close collaboration did not occur between the two sets of people, a lack of continuity from the planning stage to the implementation stage is likely. Also, if all or most stakeholders are not involved at least in an advisory way in both stages, there may not be enough accountability to ensure that the planning decisions are implemented at a later time.
Example of an Overlooked Target Group

A Head Start program was established in a local community that had an important stakeholder group, a neighborhood civic organization. The organization was very concerned with the needs of local children. This group wanted to make sure that families with the least available resources and the least ability to find an alternative program for their preschool children were given top priority. Once the Head Start program fully enrolled its cohort of children, the civic group decided to find out the social circumstances of the children and their families. To the surprise of some, they discovered that almost all the children were from very resourceful families with modest incomes that were likely to have access to comparable alternative programs. Therefore, the civic group raised its concern with the Head Start organization. When it received an unfavorable response, it pursued a lawsuit against the Head Start organization demanding that, because the neediest families were the mandated target group, they must be served. This lawsuit eventually ended up as a class action suit that resulted in a ruling that all Head Start programs in that city had to reserve a percentage of their openings for this neediest group of families.

Gardner (2000) offers an example of one way to create a description of a program involving a team of stakeholders. This program was developed using the logic model. One purpose of this exercise was to provide a clear program description; another was to more fully orient staff toward the program and its workings. At one point, some general questions were raised and discussed among all staff members, including “How would you describe how you go about working with clients?” and “What would be the important elements in the process of working with families?” Gradually, a diagram developed consisting of a series of boxes, each of which described a step in the process.
Stage 1 described how families were encouraged to request services from this program. Stage 2 included helping families assess their strengths and the constraints they faced. Stage 3 involved goal setting. Stage 4 involved matching resources to family goals. How the family and staff worked to reach the goals was the focus of Stage 6, and Stage 7 involved completing the contract. The program description was then tested by asking some of the families, staff members, and other agencies how they perceived that the program actually worked using their experiences with it. Although the results of the interviews largely validated the proposed stages and principles that had been identified, the results also suggested the need to qualify and further refine some principles.

Monitoring an intervention’s implementation can be done in several different ways. Sometimes agencies conduct staff activity studies using a list of prescribed activities, such as direct contact with clients, contact with other programs on behalf of clients, phone contact with clients, record keeping, staff meetings, and so on. In some instances, the studies may be interested in finding out whether too much time is spent on one type of activity, such as record keeping; in other instances, the interest may be in finding ways to increase time spent in direct contact with clients. These studies tend to be largely quantitative in nature (e.g., staff members tally the number of hours and minutes in each activity, each day, for a week or so).

Other evaluations attempt to find out more about the intricacies of the practice interventions provided to clients. The evaluations can be open-ended qualitative studies that identify what the social worker is actually doing on the basis of observations, videotapes, or analyzing journal entries recorded by the practitioners that describe what they are doing. Or the evaluations can be more deductive and quantitative by examining the extent to which prescribed activities reflecting a particular practice theory or practice model are implemented.

Exploration of the intricacies of a practitioner approach can be developed by prescribing an intervention protocol. For example, a protocol can be encouraged for medical social workers of a home-health program when clients manifest different types of problems. A frequently encountered problem in home-health settings is clients who are socially isolated, lack contact with friends and family, and are alone most of the time. In this case, a protocol could be to implement some or all of the following interventions:

• Provide a list of resources available to the clients that can reduce their social isolation.
• If clients are interested, assist with referral to a support group relevant to their interests and needs.
• Encourage activities appropriate to their medical condition.
• Explore and facilitate the clients’ expression of interests in particular activities.
• Help clients express their feelings about themselves, their sense of satisfaction with their lifestyle, and any desire to change it.
• Help clients explore and resolve feelings related to social isolation, such as grief from loss, a recent loss of a previous health status, or an unresolved, conflicted relationship.

Once these and other activities are implemented, efforts can be made to document any evidence that the client has progressed toward specific outcomes, such as additional supports from other agencies, increased contact with others, and less time alone.
ADOPTING AND PROMOTING EVIDENCE-BASED INTERVENTIONS

A needs assessment should result in general agreement among stakeholders about a service gap that is contributing to a social problem. However, once the need is identified, stakeholders must also come to consensus about various strategies or interventions they will use to address the identified need. This happens during the adoption phase of the intervention. As an example, a suburban community might be concerned about increasing juvenile crime. A needs assessment determines that most juvenile crime is taking place on weekdays between the hours of 2 p.m., when high school is dismissed for the day, and 6 p.m., when most parents in the community arrive home from work; therefore, more community or school-sponsored after-school programs for juveniles might result in reduced juvenile crime. The basic need (more after-school programs) has been identified. However, the type of program needed to address the problem might not be a settled matter for stakeholders. Some stakeholders could believe adolescents in the community need more self-discipline. They might favor competitive sports programs or military-type programs. Other stakeholders could believe adolescents need better role models and more individual attention. They might prefer more mentoring programs. Consensus on the type of program needed is often difficult to achieve, but key stakeholders must come to agreement about the type of program they believe is acceptable for addressing the problem.

A central issue to be addressed in adopting an intervention is whether the intervention is evidence-based. As was reported in earlier chapters, evidence that an intervention works can take many forms and can have varying degrees of validity or accuracy. It also varies in terms of its efficacy from one client population to another. The best evidence is based on evaluation studies using all of the characteristics of evaluations described in chapter 1.
A definition of evidence-based interventions was given in chapter 2, stating that evidence comes mostly from both research studies using quasi-experimental or experimental designs and clinical practice. It is also important that evidence-based sources are consistent with the values and expectations of the clients who receive such interventions. Evidence indicating that an intervention is effective needs to be pursued and promoted whenever possible. Three general ways to promote evidence when adopting an intervention are shared next. They are using evidence-based manuals and tool kits, generating clinically based evidence, and critically consuming evaluation reports. All three have the common thread of exploring the association between the introduction of an intervention and the client outcomes that are desired.

Evidence-Based Manuals

For the past several years, funding sources have begun requiring community programs to adopt evidence-based practices, which are intervention models with demonstrated efficacy in random controlled clinical trials (Hallfors & Cho, 2007). Most evidence-based interventions have a highly prescriptive treatment protocol, that is, they expect practitioners to carry out a set of activities with clients in a particular sequence. Treatment manuals and tool kits are more likely to be used in health and mental health services than in other types of social work services. Part of the appeal of evidence-based manuals is that they have a body of evidence that supports their ability to create client change. However, a growing body of health and mental health services research has shown that efficacious interventions (those that yield positive outcomes when delivered under optimum conditions in research clinics) might not be as effective (produce desired outcomes for their target populations) in “real world” settings (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Hallfors & Cho, 2007; Proctor, Silmere, Raghavan, Hovmand, Aarons, et al., 2010).
Furthermore, some practices can be difficult to implement, especially when dealing with client populations that are transient or coping with major life stressors associated with poverty, such as unreliable transportation or problems locating trustworthy child care.

There are at least three major concerns about the adoption of evidence-based practices available in manuals. First, many program directors and practitioners have been reluctant to embrace these evidence-based practice manuals. One reason for their reluctance is the perception that evidence-based practices interfere with the freedom to do their jobs as they see fit. In other words, some practitioners believe that externally driven evidence-based practices do not value their clinical judgment. This can be problematic since mental health practitioners, like workers in other fields, resist practices they believe have been imposed on them (Brodkin, 1997; Glisson & Green, 2006).

Second, many funders demand that programs use evidence-based practices without an appreciation for the investment of time and resources required to make them work as intended. There is a large body of literature showing that implementing evidence-based practices takes at least two years, often longer (Durlak & DuPre, 2008). Funders must be willing to give programs the time, funding, and other resources needed to hire the right staff with the kinds of degrees, certification, and experience to perform skills required by the practice, as well as time to integrate new practices into the agencies where they are delivered. Programs must also devote considerable resources to training their staff to implement evidence-based programs if they expect them to be effective.

Third, evidence-based practices must be a good match for the problem and the client population for which they are implemented.
A clinic-based nutrition program for pregnant teens delivered in public health clinics might be highly effective in an urban or suburban area. Urban and suburban clinics are likely to run a number of clinic sites, and potential clients might be able to rely on a well-developed public transportation system to get them to visits. The same program might fail in poor, rural areas since it requires teenagers to visit health clinics that are spread out across large geographic areas. Reliable transportation might not be available in these areas.

In brief, adopting an evidence-based intervention using a manual is not a guarantee of success. An intervention's success ultimately rests on whether the right intervention was adopted for the right population at the right time and in the right context (Fixsen et al., 2005). Even under the best of circumstances, implementing a new intervention takes a considerable amount of support from the stakeholders. Yet, use of evidence-based manuals in combination with other initiatives such as obtaining clinically based evidence and critically consuming other evaluations is definitely a worthy effort in promoting evidence-based interventions.

The chapter contains more material on this topic; once you have selected your scenario, I can point you to where the text discusses it further.

Dudley, J. R. (2014). Social work evaluation: Enhancing what we do (2nd ed.). Chicago, IL: Lyceum Books.
