Head Start Bureau Evaluation Handbook

A Companion to The Program Manager's Guide to Evaluation


Since its inception, Head Start has grown and its services have expanded significantly in response to dramatic changes in families and their communities. Local Head Start programs are at the forefront of these new developments and must constantly re-evaluate themselves to adapt to changes in needs and resources. This attention to program performance and continuous quality improvement is critical to the good management of a program and, as such, directly benefits the children and families served by Head Start. Program evaluation and continuous quality improvement will ensure that Head Start programs develop as learning communities.

The Advisory Committee on Head Start Quality and Expansion, in its final report, Creating a 21st Century Head Start, defined three key objectives for local Head Start programs in the next century:

In addition, in 1994 the Advisory Committee on Head Start Quality and Expansion recommended improvements in program management and increased attention to measuring program performance. Ongoing program evaluation is an integral part of good program management and necessary if local programs are to continue to adapt to changes in families, environments, and resources.

To help program directors meet these challenges, the Administration on Children, Youth and Families developed a series of evaluation guidebooks that explain program evaluation-what it is, how to do it, and how to understand it. The main evaluation guidebook, The Program Manager's Guide to Evaluation, answers program managers' questions about evaluation and how to make evaluation benefit programs, staff, and families. Good program evaluations help to improve program operations, measure program performance and effects, and document important lessons for other programs. With this information, program managers are better able to direct limited resources to where they are most needed and most effective.

The Program Manager's Guide to Evaluation is supplemented by this document, The Head Start Bureau Evaluation Handbook. This handbook was specifically written for you, the local Head Start program director, to explain how evaluations can be conducted by Head Start programs. The Head Start Bureau supports your efforts to improve your program and document your successes-this handbook is designed for you. Ongoing program evaluation is how Head Start programs will continue to "ensure quality" and "respond flexibly" for children and families into the next century.

Table of Contents

Evaluations Can Help You!
Evaluating Head Start Programs
Basic Evaluation Questions
Specifying Program Implementation Objectives
Specifying Participant Outcome Objectives
Specific Evaluation Issues for Head Start Programs
Head Start Partnerships In Evaluation
Using Available Tools to Evaluate Head Start Programs
Evaluation Helps Create Learning Communities

Head Start Bureau Evaluation Handbook

In this handbook...
Getting answers to basic questions
Specifying objectives
Evaluating Head Start programs
Partnerships
Using available tools

The purpose of the Head Start Bureau Evaluation Handbook is to serve as a resource for the Head Start community. This handbook, along with The Program Manager's Guide to Evaluation, will enable you (the Head Start program director, your staff, and parents) to increase your understanding of program evaluation. Evaluation is an important method of determining how Head Start programs will strive for excellence, determine expansion routes, and forge new partnerships. One key objective of this handbook is to provide the Head Start community with knowledge that will aid them in developing a route to enhance Head Start quality into the 21st century.

How can Head Start grantees chart a new course for the 21st century? How can Head Start grantees identify needed partnerships, collaborations, and coalitions with whom they can work to improve services to children and families? These are basic and vital questions for all Head Start programs-large urban programs, small rural programs, American Indian programs, and Head Start programs serving migrant families. A well-planned and well-executed evaluation can answer all of these questions.

The Head Start Bureau Evaluation Handbook and its "sister" publication, The Program Manager's Guide to Evaluation, are designed to provide assistance to Head Start programs that are seeking answers to these questions. The Administration on Children, Youth and Families and the Head Start Bureau believe that program evaluation is essential for attaining the shared goal for all programs-that of positive developmental outcomes for children and self-sufficiency of their families.

The two guides have separate but complementary functions. The Program Manager's Guide to Evaluation provides detailed information on designing and implementing evaluation, including the following topics:

The Head Start Bureau Evaluation Handbook expands this information to help Head Start programs that, in their quest for quality, must become evaluation "literate." As programs seek to find financial support from corporations and foundations, greater emphasis will be placed on program evaluation because it provides evidence of a program's success and impact. Also, Head Start program participants are of growing interest to the research community. It is important that Head Start program managers have the knowledge to collaborate with researchers studying Head Start families and activities to ensure the quality and relevance of the research. Head Start programs, like the families and children they serve, must be empowered to take their rightful place as full partners in planning, implementing, and evaluating their programs.

To develop as a learning community, all Head Start programs must take time to review what they are doing, assess progress, identify facilitators and barriers to success, and define specific goals for staff, the children, and their families. Program evaluation simply places these steps and the information gathering you do into an organized framework. When you use the results of program evaluation to make changes in your service approaches, you are engaging in continuous quality improvement!

The 1994 Advisory Committee on Head Start Quality and Expansion called upon Head Start programs to prepare to move into the 21st century and strive for excellence by:

"...developing a strategic plan for meeting continuing and/or new Head Start Performance Standards and beginning to identify outcome measures for program components to support strong outcomes..."

    -Creating a 21st Century Head Start, 1994

Evaluations can help you!

The usefulness of evaluations to program staff and families is stressed throughout both guidebooks. Some program directors and staff have had negative experiences with evaluation-experiences that have left them feeling discouraged or isolated rather than empowered. This is more likely to happen when staff do not have an understanding of the evaluation process, their role in the process, or the potential benefits that they can experience from an evaluation.

Evaluations are a helpful part of continuous quality improvement because they provide information about the way services are delivered, the quality of services, and outcomes for children and their families. Evaluations are also useful because they can validate the effectiveness of Head Start services and provide evidence to potential funders of a successful program.

The two guidebooks are intended to help you accomplish this. They are written for you, the Head Start program director, and your staff. You may want to review The Program Manager's Guide to Evaluation before you read this handbook so that you will have a general framework for understanding the issues discussed here. Throughout this guide, references will be made to specific sections of The Program Manager's Guide to Evaluation that can be reviewed for further information.

Evaluating Head Start programs

Since its beginning in 1965, Head Start has provided hope and support to more than 15 million low-income children and their families across the United States. While the overall goal of Head Start remains the same-to increase the social competence of low-income children and their families-the world of Head Start is dramatically different.

The needs of children and families who live in poverty are more complex and urgent. The face of poverty continues to change. More children are living in poverty than ever before. More children live in families with a single parent who is under 20 years of age, a parent who typically has dropped out of school and lacks wage-earning skills. Complicating the issue of poverty are increased community violence, substance abuse, domestic violence, and homelessness.

Not only has the profile of Head Start families changed, there also have been dramatic shifts in the landscape of other community services. The number of new and categorical programs and organizations has increased, as has the number of agencies serving young children and their families, enhancing the capacity of our communities.

In addition, we have new knowledge about the attributes of services and supports that are effective in changing long-term outcomes for young children. It is by implementing effective evaluations and using the insights gained from these evaluations that Head Start programs can respond to changes in children, families, and communities.

The Department of Health and Human Services funds a variety of programs following the Head Start model. Historically, Head Start programs serve low-income children aged 3 to 5 and their families, with component services including health, education, social services, and parent involvement. Head Start grantees and delegate agencies are funded throughout the U.S. and U.S. territories, Puerto Rico, and the Virgin Islands in urban, rural, and suburban neighborhoods. Migrant and tribal Head Start programs tailor traditional Head Start services to the needs of children from migrant or American Indian families.

What is evaluation?
Program evaluation is a systematic method for collecting, analyzing, and using information to ask basic questions about a program and its effectiveness.

Through various demonstrations, Head Start has implemented special initiatives to address priority goals. One example is the Comprehensive Child Development Program (CCDP), which began in 1989 and was designed to expand Head Start services to cover the years of childhood from birth to age 5. In addition, specific services were added to promote family economic self-sufficiency. Recently, Early Head Start programs were funded to begin intervention efforts from birth to age 3.

The Head Start Bureau also funds other demonstration and research projects that examine specific topics or services. Frequently, these grants are funded and designed with evaluation of program implementation and participant outcomes in mind. Many require the hiring of an outside evaluator or involve participation in a national evaluation. These specific efforts demand significant evaluation expertise and are beyond the scope of this handbook. However, if you are involved in a research and demonstration grant, this handbook and The Program Manager's Guide to Evaluation can help you communicate more effectively with evaluators and contribute to research efforts.

The Program Manager's Guide to Evaluation defines evaluation simply as a systematic method for collecting, analyzing, and using information to ask basic questions about a program and its effectiveness. As a Head Start program manager, you already collect lots of information-such as what is in the Program Information Report (PIR)-about your program, services, staff, and the children and families you serve. By doing this you are already engaged in an important evaluation activity.

This information should not be viewed simply as a Head Start reporting requirement. If you review it to discover trends in
your families, respond to new community needs, or make changes in program components that are not working, you are already doing evaluation!

This handbook and The Program Manager's Guide to Evaluation are designed to show you how to "systematize" the evaluation activities that you probably already are doing informally. It is important to see these activities as part of good program operations. An effective Head Start program manager will conduct ongoing evaluation of program services in order to improve the program and make appropriate changes as part of continuous quality improvement of efforts and plans. Evaluation should not be seen as unnecessary to Head Start programs. First of all, children, families, and the communities they live in change constantly. You need information about these changes to make sure your program is responding to them. Also, in these financially tight times it is becoming increasingly important for all human services programs, including Head Start, to document their successes for the public, for the government, and for potential funders.

What evaluation is NOT. Evaluation is not monitoring. When you are assessed for compliance with Head Start Program Performance Standards, your program is not being evaluated. Monitoring is a process of checking that your program has basic components, resources, staff, and activities in place to ensure that all Head Start programs provide core services. Many Head Start Performance Standards can be restated as program implementation objectives and measured as part of an evaluation, so there can be some relationship between evaluation and monitoring. For example, a target of conducting a certain number of home visits per child per year can be stated as a program implementation objective.

Evaluation is not the same as research. Many Head Start programs may be involved in university-based research efforts. Research tests hypotheses about child or family development-questions that may or may not be of immediate concern to a program manager. Research does not necessarily focus on specific questions concerning your program and individual participants.

What makes program evaluation different from research is that it answers specific questions about your program and participants by assessing whether you have achieved the goals that you have defined for program implementation and participant outcomes.

Basic evaluation questions

Although there are many unique aspects of different Head Start efforts that must be considered in an evaluation, the basic evaluation questions presented in The Program Manager's Guide to Evaluation are relevant to all Head Start-funded programs. These questions are:

Have you been successful in attaining program implementation objectives?

Have you been successful in attaining your participant outcome objectives?

The basic design of Head Start, combined with your programmatic specializations and the unique aspects of your community, should guide your efforts in designing a program evaluation that addresses these basic questions.

Specifying program implementation objectives

It is important to remember that a useful evaluation will assess the process of implementing and operating a program as well as the outcomes experienced by the participants. However, in order to assess this process, the goals and objectives of program implementation must be clearly stated. Even if you have been managing an ongoing Head Start program, it is very important to define clearly what you are doing and what you are trying to accomplish. For experienced and new Head Start grantees, reviewing your current operations will help you better define objectives for your program, your staff, the children, and the families.

Head Start and other child development programs provide myriad services, service approaches, activities, and interventions. Therefore, it is important to develop a framework for your program that will help you specify your program implementation objectives-that is, what you plan to do (the types of services and their duration and intensity), who will do it (the staff), and who you plan to reach and how many (the characteristics and number of children and families).

Head Start programs are fortunate in that they have an explicit set of standards that define many of these goals for program implementation and operation. Head Start programs also have program-level information on staffing and activities available to them in the form of the Program Information Report (PIR). Although designed as a management and reporting tool, the information contained in the PIR can be extremely useful in answering questions about program implementation.

The process of defining specific program implementation objectives is discussed in The Program Manager's Guide to Evaluation (Chapter 5). In general, your definition of objectives should be guided by your answers to the following two questions:

If you want to use the evaluation information to improve your program's services to a particular group, you may only need to focus on the implementation objectives for that population. This may be either a particular age group, such as teenage parents, or a particular participant type, such as homeless children.

If, for example, you want to learn how successful you have been in attaining your implementation objectives for handicapped children, you may focus on this population and the services that this group is likely to receive. The Head Start Family Information System (HSFIS) is one tool that can assist you in the process of categorizing participants and tracking services received by individual children and families.

Specifying participant outcome objectives

Specifying participant outcome objectives first requires defining who your participants are. Head Start serves children and their families. So it is important to define objectives specific to the changes in development, economics, knowledge, attitudes, behaviors, and/or life situation that you expect to occur in children and family members as a result of your services. (See The Program Manager's Guide to Evaluation, Chapters 5 and 6).

For many Head Start programs, this task can be a complex one. The desirability of each outcome objective varies depending
upon the level of the child and family when they entered the program. A child with severe developmental delays at entry into the program should not be expected to surpass his or her peers. Instead, what is desired is that children make gains over time appropriate to their relevant developmental milestones.

The key to the success of using this type of outcome assessment approach is a carefully documented needs assessment and statement of appropriate goals for each child and family. Individualized family service plans do this by defining goals for each family member within certain domains like health or education. Goals may be stated as either short-term or long-term. It is important to remember that when a specific goal for a child is identified, it must include a description of indicators that can be used to determine whether or not the goal was achieved. How do you know that a change has occurred? For example, if a goal is to improve language skills, that goal must include a statement of how you will know that a child's language skills have improved. Selecting appropriate measures of change in children and parents is an important step in an evaluation. Chapter 7 of The Program Manager's Guide to Evaluation will help you with this task.

Specific evaluation issues for Head Start programs

Head Start programs have been successful over the years because of their innovative design and flexible implementation. The unique aspects of local Head Start programs must be considered when you are planning an evaluation. New programs can do this as part of their initial planning stages. However, established programs can gain useful insights by completing this process as part of continuous quality improvement. It is important to think about these issues in advance of beginning an evaluation. We introduce here a few key Head Start characteristics that may affect your program evaluation.

Assessment-driven services. A unique feature of Head Start programs is that the services provided are determined by the individual needs of children and their families. That is, the program itself can offer a wide range of services, with specific services provided to participants based on individual assessments. To evaluate your success in achieving positive outcomes for participants, specific services delivered should be linked to changes in children and families. How do you know that child and family improvements are due to Head Start? You may need to group services into particular categories and identify the types or ages of children who receive specific categories of services. The Head Start Family Information System (HSFIS) is one source of family and child-level information that can assist you in this process.
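As a rough illustration of the grouping described above, the sketch below tallies individual service contacts into broad categories per child, so categories of service can later be examined alongside child and family outcomes. The service names, categories, and record fields here are invented for illustration and are not part of any actual Head Start data system such as HSFIS.

```python
# Hypothetical sketch: group individual service contacts into broad
# categories per child. All field names and values are illustrative.
from collections import defaultdict

service_contacts = [
    {"child_id": "C001", "service": "immunization", "category": "health"},
    {"child_id": "C001", "service": "home visit", "category": "parent involvement"},
    {"child_id": "C002", "service": "speech screening", "category": "health"},
    {"child_id": "C002", "service": "literacy session", "category": "education"},
]

# Count how many contacts each child received in each service category.
contacts_by_child = defaultdict(lambda: defaultdict(int))
for contact in service_contacts:
    contacts_by_child[contact["child_id"]][contact["category"]] += 1

for child, categories in sorted(contacts_by_child.items()):
    print(child, dict(categories))
```

A summary like this, however it is produced, is what lets a program ask whether children who received a given category of service showed the expected gains.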

Variation in children and families. You will need to consider the variation in your Head Start children and families when you define your program implementation and participant outcome objectives. The services you decide to implement, the duration of services, and the staff hired to provide services may vary depending on the characteristics of your community. For example, you may need to define different implementation objectives if you are targeting a new group of children (such as children of substance-abusing families). In addition, specific outcome objectives for children may depend on their individual needs as defined by assessments and individualized service plans.

Multiple developmental levels of program participants. Head Start now funds programs that serve children ranging from birth to 5 years of age and provide some services to pregnant women. This range means that there will be significant differences in the developmental levels of participants. In addition, parents of these children may include teenagers. Because programs must provide services to participants that are appropriate for their developmental levels, the services provided, the duration of those services, and the characteristics of the staff will need to vary depending on these developmental levels.

This variation suggests that an evaluation must clearly specify program implementation objectives for children and parents at varying ages and levels of development. Similarly, the evaluation will need to clearly identify participant outcome objectives with respect to participant developmental levels. For example, if you have a large number of teenaged parents, you might not
choose the same objectives for parent participation for them as you would for other parents.

Cultural appropriateness. Cultural appropriateness is an important issue to consider in any evaluation. Children served by Head Start come from diverse cultural, ethnic, and religious backgrounds. An evaluation must take these differences into account when defining objectives, selecting evaluation measures and methods of information collection, and establishing trusting relationships for conducting interviews.

The feedback from your own Head Start community is the best resource you have for ensuring that your evaluation is culturally appropriate. Use your policy council and others to review evaluation plans and questionnaires. They can tell you which questions or methods are most effective at soliciting the needed information and most respectful of their cultures and backgrounds. If you hire an outside evaluator, it is your responsibility to educate him or her about the cultural differences and unique features within your Head Start population.

Confidentiality. Confidentiality is always an issue in evaluation. It must be addressed up front, before you begin collecting information about or from families. Often sensitive information is collected in an evaluation, and as an advocate for the family, you must make sure that this information is kept secure. Information identifying specific families (such as names, addresses, and Social Security numbers) should never be included on evaluation materials. Always use a system of identification codes so that families cannot be identified from these materials. The master copy of identification codes matching individual families should be kept secure and separate from other evaluation and program records.
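The identification-code practice described above can be sketched in miniature as follows. The family names, code format, and record fields are invented for illustration; the essential point is that evaluation records carry only a code, while the master list matching codes to families is stored separately and securely.

```python
# Hypothetical sketch of the identification-code practice: evaluation
# records never contain names; only the master crosswalk (kept locked
# away, separate from evaluation data) links codes back to families.

families = ["Alvarez", "Brown", "Chen"]  # invented names for illustration

# Master crosswalk: stored securely, apart from all evaluation records.
master_codes = {name: f"F{i:03d}" for i, name in enumerate(families, start=1)}

# Evaluation records carry only the code, never identifying information.
evaluation_records = [
    {"family_code": master_codes["Brown"], "home_visits": 4},
]

# Check that no record contains a name field.
assert all("name" not in record for record in evaluation_records)
print(evaluation_records)
```

With this separation, losing or sharing the evaluation records alone does not reveal which family is which.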

However, there are situations where confidentiality cannot be maintained. This usually occurs in the context of learning about suspected child abuse during evaluation interviews. (In other instances, this could be knowledge of drug use or other illegal activities). There are Federal and State guidelines governing the need to report suspected child abuse and you, your staff, and your evaluator must be aware of these requirements. In addition, families should be made aware of your responsibility to
report these issues before they give you any information. You probably already have addressed this issue in designing releases of information and in staff training. As part of planning an evaluation, you must make your agency policy explicit to both your families and your evaluator.

Poverty and stereotyping. Although your staff and program are experienced in serving low-income children and families, some evaluation experts are not. Just as it is important to teach your evaluation staff to be sensitive to issues of culture, it is also your responsibility to educate them about issues of poverty and the day-to-day stresses and experiences of poor families. People who are inexperienced in working in impoverished communities frequently engage in negative stereotyping of poor people. You and your staff frequently are the only advocates many of these families have. Teaching your evaluation team that certain behaviors and beliefs of your families are not due to innate or cultural differences but are due to the effects of poverty and living in resource-poor communities will ensure that your families are treated with respect and that the evaluation is conducted effectively and appropriately.

Burden of evaluation on staff and families. Program managers frequently cite the cost and burden of evaluation as reasons not to do it. However, not doing any evaluation can cost a program more money when funds are used to support ineffective activities. Evaluation does take time and effort. As noted above, you already are engaging in many activities that overlap with a program evaluation, such as data collection. You do need to consider the burden on your staff and families. But very good evaluations can be conducted with minimal extra effort. You can examine your current data collection and revise it to be more efficient yet still collect evaluation information. This is one reason why staff should be involved in all planning stages of an evaluation.

Your staff know your families, know your program, and know the best methods of obtaining information about both. In addition, staff will value evaluation more once they see their successes documented and their efforts validated. They also may
have interesting insights and may suggest additional questions for the evaluation. Offering stipends, coupons, or needed supplies (such as diapers) in exchange for a family's time and effort encourages participation and demonstrates that families' time and knowledge are valued by the program.

Head Start partnerships in evaluation

Every community offers many sources of evaluation support. It is expected that program managers will need help in conducting an evaluation, even after reading The Program Manager's Guide to Evaluation! Most Head Start programs do not have the capacity to conduct large-scale program evaluations. Because of this, Head Start programs must be proactive in expanding existing partnerships and forming new ones to enhance their evaluation and research capacity. These partnerships must include the multiple stakeholders involved with, concerned about, and supportive of programs for children and families.

Who are Head Start's partners?

Head Start families. Head Start families are primary stakeholders. It is time to revisit the role of Head Start Policy Councils, Policy Committees, and Parent Center Committees. Perhaps it is more meaningful for these groups to think about helping children by expanding their committees beyond the traditional health, education, social services, and parent involvement components.

Parents can be active on research and evaluation, strategic planning, organizational, and professional development committees. What questions do they have about the program's operations? What outcomes would they like to see for their children? Their input is extremely important in defining program implementation and participant outcome objectives and in the planning of an evaluation. Families also have a major role in shaping continuous quality improvement of the program. Their assessment of the program's activities and their own progress toward goals is important information and should be included as part of your evaluation.

Also, parents can be partners in any research and evaluation initiatives. Parents can be employed as evaluation assistants who recruit other participants, help collect information, and perform data entry. Evaluation activities can fit within Head Start's philosophy of parent involvement and empowerment. Parents and community members also are your best "experts" in assessing the appropriateness and cultural sensitivity of information collection instruments and procedures. Which questions do parents find confusing or perhaps insulting? Can more appropriate wording or translations be found for evaluation instruments?

Agency boards and policy councils. Your agency board or Head Start policy council make excellent partners in an evaluation for several reasons. First, the diversity of these boards should reflect your community, and their members are a resource for ensuring that your evaluation is planned and carried out in a culturally appropriate way. Second, you can look for evaluation expertise in current and proposed board and council members to assist you by providing evaluation advice. Third, it is important that your board or council understand the importance of evaluation to continuous quality improvement, and by seeking their input early you can generate interest and enthusiasm for evaluation activities.

Also, if your board or council is not "evaluation literate," involving them in the review of the evaluation plan and report is one way to develop expertise among the board members that will be a resource to you in the future.

Local schools. State and regional education agencies and local school districts share our concern about our children. They also have a vested interest in knowing more about what programs and services work. With the increased use of technology and computerized school records, many Head Start programs and their local school districts can collaborate to evaluate the progress of Head Start children in school.

For example, the largest Head Start program in Miami, in collaboration with Metro-Dade schools and a local university, is tracking the progress of Head Start children through their school years to high school graduation. Some school districts and State education agencies have in-house evaluation divisions that can assist your program with designing and conducting evaluations as a cooperative effort. Even something simple, such as gaining access to a school's automated testing equipment, can help you conduct your evaluation.

See Chapter 4 and the appendix of The Program Manager's Guide to Evaluation for tips on how to identify potential evaluators and evaluation collaborators.

The business community. The corporate community and local networks of community agencies with whom you collaborate are just as interested as you are in assessing program outcomes. Increasingly, United Way agencies and local and State departments of health and human services are requiring agencies to define expected participant outcomes and program implementation objectives at the beginning of funding to assess program progress and success.

The collective expertise in these agencies is considerable. They have evaluation units that can work with programs to develop a series of evaluation objectives and provide technical assistance in developing strategies for measuring them. Often you are working with the same or similar families, and developing shared multiagency databases can benefit both agencies and participants. Head Start State collaboration projects can be helpful in developing statewide evaluation partnerships.

Colleges and universities. Universities and colleges have a wealth of knowledge and expertise to share. Faculty and students are constantly searching for research opportunities and will work in concert with human service agencies to develop joint research and evaluation initiatives. Faculty and graduate students in schools or departments of education, psychology, human development, human ecology, medicine, allied health, and social sciences make wonderful partners. Discuss "bartering" your access to and knowledge of Head Start families for help with evaluation or additional resources. Early Head Start program requirements include collaboration with a university or research organization in the development of research questions as part of continuous quality improvement. All Head Start programs can benefit from similar collaborations.

Each stakeholder may hold different perceptions of the role of program evaluation. While stakeholders vary in their knowledge and understanding of program evaluation, each brings to the table their culture, their needs, and their wants. The Program Manager's Guide to Evaluation (Chapter 4) discusses in detail the selection of an external evaluator. This information is essential reading for any Head Start program manager seeking an outside evaluator. It is essential that a clear understanding of the Head Start philosophy guide any Head Start program evaluation. Protecting the integrity of Head Start children and families, as well as the credibility of the Head Start program, must be paramount in the design of any evaluation project.

Using available tools to evaluate Head Start programs

Head Start has several programs and activities in place that can assist local Head Start programs with an evaluation. The Program Information Report (PIR), Head Start Family Information System (HSFIS), Head Start Cost Management System (HSCOST), Self Assessment Validation Instrument (SAVI), On-Site Program Review Instrument (OSPRI), and Head Start Funding Guidance System (HSFGS) all collect, organize, and report information useful to a program evaluation. These systems collect information on program budgets and activities, child and family needs, and services delivered that can answer some of your evaluation questions. Head Start's required community needs assessment (CNA) provides information on the level of services and resources in the community at different times.

Head Start also funds a training and technical assistance network within each Federal region to train and assist grantees with program activities, including evaluation. Evaluation help is a valid technical assistance request. Finally, Head Start conducts national evaluations of specific efforts or components that can contribute information useful to a local program evaluation.

Program Information Report (PIR). The Program Information Report (PIR) is an annual report completed by Head Start programs. The PIR provides program-level data regarding staff, children and families served, and services delivered. This information, which is reported to the national Head Start Bureau, is useful for tracking trends and for creating State, regional, and national summaries of the Head Start population and services delivered. The PIR contains several items that relate to program quality, such as classroom ratios, staff turnover, staff qualifications, and the proportions of children who receive medical, dental, disability, or social services during the course of the year. However, this reporting system is limited in that it does not provide child- or family-specific information. Participant-specific information is needed if you are to see whether you have achieved your objectives for participant outcomes.

Information gathered for compiling a Head Start program's annual PIR can be used as milestones throughout the program year. Programs use a variety of program-specific and commercial information management systems for maintaining the records necessary for completing the PIR. When program managers examine the reports generated from these databases on a regular basis, they can monitor what is going on at given points in time and compare how they are doing with where they should be in terms of meeting program compliance requirements. However, unlike program evaluation, this form of program monitoring will not answer the "how" and "why" questions of program quality.

Community Needs Assessment (CNA). Head Start requires that programs conduct a community needs assessment (CNA). A CNA identifies gaps in resources in your community. What services are currently needed by your community that are not available or accessible? A comprehensive CNA provides good baseline information about the community for a program evaluation. A CNA that is conducted on a regular basis can provide information on changing needs and resources. This way, you can see how your community is improving service delivery. CNAs can also be used as tools for measuring progress in community collaboration and for assessing the impact of your Head Start program on community services.

Head Start Family Information System (HSFIS). The Head Start Family Information System (HSFIS) is another management information system developed to track child and family needs and services delivered. (At this time, HSFIS has not been adopted by all grantees.) Many of its features and data elements are embedded in the commercial databases used by many Head Start programs. Although designed as an in-house program management tool, HSFIS provides valuable information that could serve as indicators of program effectiveness. Data related to program quality include timeliness of services, services delivered, type of provider, and types of unmet needs.

These data systems also identify specific barriers to service delivery, such as family refusal. In addition, HSFIS tracks family service needs and the match between family needs and services delivered. Because information is collected on child and family levels, you will be able to link specific services with individual families. This way you can tell if the services you provided had a positive effect on families.

Head Start Cost Management System (HSCOST). The Head Start Cost Management System (HSCOST) is yet another monitoring tool, but it is not universally required across Regions. HSCOST reports grantees' planned budgets, not actual expenditures, to the Regional Head Start offices. Examples of items included in HSCOST that could be useful in answering questions about program implementation include the number of home visits per child per year and the number of classroom hours. Data collected on staffing, salaries, and caseloads, as well as the proportions of the budget spent on each component area, might be useful as quantitative indicators of program quality.

Self Assessment Validation Instrument (SAVI) and On-Site Program Review Instrument (OSPRI). All Head Start programs are required to complete an annual self-assessment. While there is no mandated process for accomplishing this annual self-assessment, there are two generally accepted instruments: the Self Assessment Validation Instrument (SAVI) and the On-Site Program Review Instrument (OSPRI). The use of these instruments varies by region.

The OSPRI is the monitoring process used by the national Head Start Bureau during on-site reviews to assess grantee compliance with Head Start Performance Standards and regulations. The OSPRI contains 222 items addressing all of the Head Start component areas, including education, health, disabilities, social services, and parent involvement, as well as program administration, facilities, and staffing. These items in the OSPRI and SAVI can be useful in answering evaluation questions about program implementation.

Evaluation questions should be answered on an ongoing basis during program operation, not after the program is over. For example, why wait until the end of the year to complete the OSPRI? Wouldn't it be better to devise a method for conducting ongoing program evaluation and develop new strategies to respond to program issues? Ongoing evaluation ensures that information is collected routinely, including information that may be difficult to obtain after services have ended. Evaluations that are incorporated as an integral part of ongoing program operations provide optimum benefits to managers, staff, and children and families.

Head Start Funding Guidance System (HSFGS). The Head Start Funding Guidance System (HSFGS) provides funding information on individual Head Start programs. HSFGS includes only Head Start funds, not monies from other funding sources. Head Start costs per child can be generated by HSFGS on a State, regional, or national basis. This cost information can be used in an evaluation to assess program cost-effectiveness. However, an in-depth cost-effectiveness analysis of your program is very complicated to do, and you should seek assistance from an outside consultant experienced in cost analysis of social service programs.
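As a rough illustration of the kind of per-child figure described above, the sketch below divides an annual Head Start grant by funded enrollment. The dollar amount and enrollment count are invented for the example; a real cost-effectiveness analysis involves far more than this simple arithmetic.

```python
# Hypothetical illustration only: the grant amount and enrollment
# figures below are invented, not drawn from HSFGS data.
head_start_funds = 450_000.00  # annual Head Start grant, in dollars
children_enrolled = 120        # funded enrollment

# A simple average cost per enrolled child for the program year.
cost_per_child = head_start_funds / children_enrolled
print(f"Cost per child: ${cost_per_child:,.2f}")
```

Note that because HSFGS covers Head Start funds only, a figure like this understates the true cost of services for programs that blend in money from other sources.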

Evaluation helps create learning communities

If Head Start programs are to achieve excellence and provide quality programs, they must respond to the direction of the Advisory Committee on Head Start Quality and Expansion and create learning communities. To become a learning community, Head Start grantees must be able to learn from and about themselves. Program managers must engage in continuous quality improvement. This requires that managers examine what they are doing and how they are doing it, and search for new approaches that maintain the integrity of their commitment to providing comprehensive services for children and families. Well-planned program evaluations will help us do this.
