Mr. Chairman, my name is David Muhlhausen. I am a policy analyst at The Heritage Foundation's Center for Data Analysis, specializing in crime policy and program evaluation. In beginning my testimony, I must emphasize that the views I express are entirely my own and should not be construed as representing any official position of The Heritage Foundation. With that understanding, I am honored to have been asked by the Subcommittee on Crime to testify today on reforming the evaluation process at the Office of Justice Programs (OJP).
EVALUATIONS OF CRIME-PREVENTION PROGRAMS
In 1996, Congress directed the U.S. Attorney General to conduct a review of state and local crime-prevention programs funded by the U.S. Department of Justice (DOJ). The resulting 1997 report by the University of Maryland examined 500 evaluations of crime-prevention programs. While the report did not evaluate specific programs itself, it reviewed the scientific studies of those programs and judged them on their scientific merit. Congress can use this report as a starting point for identifying effective and ineffective programs.
What Works
Given my time constraints, I will concentrate on the 1997 report's findings on what works in policing. Policing activities with clear strategies for targeting crime-risk factors are effective in reducing crime. Some effective strategies include: (1) targeting crime "hot spots," (2) targeting illegal possession of firearms by criminals, and (3) proactively targeting repeat offenders, which increases the likelihood of the arrest and incarceration of dangerous criminals. When the police develop clear strategies, they can make a difference in reducing crime.
The 1997 report suggested that problem-oriented policing is a promising approach. Since the report was published, new evaluations, sponsored by the National Institute of Justice (NIJ), have become available which indicate that some types of problem-oriented policing are effective in reducing crime. A 1999 randomized study found that specific plans developed to reduce crime in Jersey City, such as aggressive order maintenance and changes to the physical environment, produced significant reductions in crime. Another study, published in 2001, found that Boston's Operation Ceasefire led to a dramatic drop in the number of the city's youth homicides. Operation Ceasefire successfully reduced youth homicides by targeting a small number of chronically offending youth gang members.
What we have learned from problem-oriented policing and other policing strategies is that local law enforcement can make a difference. Developing a clear plan for using local resources to solve problems is more effective than simply having local law enforcement agencies spend federal dollars.
What Doesn't Work
The 1997 report concluded that neighborhood watches, in which volunteers monitor their neighborhoods in an effort to deter criminals, are ineffective. In addition, community policing with no clear strategy for targeting crime-risk factors has been ineffective in reducing crime. While the federal government has encouraged community policing, the report states that "there is no evidence that community policing per se reduces crime without a clear focus on a crime risk factor objective."
What's Unknown
The 1997 report noted that many of DOJ's crime-prevention programs either were evaluated as ineffective or escaped scrutiny altogether. It added: "By scientific standards, there are very few 'programs of proven effectiveness.'" The 1997 report called for Congress to devote more resources to evaluating crime-prevention programs. Yet Congress still has not given sufficient attention to this request to ensure that federally funded crime-prevention efforts are in fact preventing crime. It remains the case that our understanding of which OJP programs work can be significantly increased through the use of evaluation research.
THE HERITAGE FOUNDATION'S RELATED RESEARCH
The Heritage Foundation has recently begun to evaluate the effectiveness of federal programs. While The Heritage Foundation has not individually studied OJP grants, its evaluation of the Office of Community Oriented Policing Services (COPS) has shed light on that program's effectiveness. Some observers claim that the COPS program is a proven success because crime has declined every year since the program's creation. This assertion does not account for the fact that the nation's violent crime rate began to decline before the program was created.
In May 2001, The Heritage Foundation published an impact evaluation of COPS, which found that grants to hire additional officers and purchase technology were ineffective in reducing violent crime. The analysis suggests that simply continuing funding for the COPS program will be ineffective in reducing violent crime.
In contrast to hiring and redeployment grants, which were not shown to be effective, the analysis found that COPS grants targeted at reducing specific problems, such as domestic violence, youth firearm violence, and gangs, were somewhat effective in reducing violent crime. Narrowly focused COPS grants are intended to help law enforcement agencies tackle specific problems, while COPS hiring and redeployment grants are intended simply to pay for operational costs of police departments. The Heritage Foundation analysis builds on research showing that how the police are deployed matters more in reducing crime than the number of officers funded.
RESEARCH BY THE UNIVERSITY OF NEBRASKA
Approximately six months after the publication of The Heritage Foundation COPS evaluation, the University of Nebraska at Omaha published a federally funded evaluation of COPS. The University of Nebraska report was financed through a COPS office grant of over $116,000.
The University of Nebraska study found that two types of COPS grants, hiring grants and narrowly focused grants, reduced crime rates in cities with populations over 10,000. The study also found that redeployment grants failed to reduce crime. In addition, for cities between 1,000 and 10,000 residents, the study showed that COPS hiring grants were associated with an increase in violent and property crime, while redeployment grants were associated with an increase in property crime. The results of the COPS-funded study have been used to support claims about the program's effectiveness.
The University of Nebraska study was critical of research that did not "control for extraneous factors that may be correlated with both increases in the number of police officers and increases in crime rates, such as local politics, or fluctuation in the local economy of cities." Unfortunately, data limitations did not permit the authors to make significant improvements to the existing research. For example, city-level data for five of the six socioeconomic variables in the study were not available on a yearly basis. Instead of using data for each year between 1994 and 1999, the following control variables were held constant at their 1990 levels: minority population percent, single-parent household percent, young people percent, homeownership percent, and percent of people in the same house since 1985.
Given that the University of Nebraska study covers the period 1994 to 1999, the use of data exclusively from 1990 for most of its control variables is inappropriate and likely to reduce the validity of the findings. Holding control variables constant at 1990 levels ignores important changes that occurred on a yearly basis between 1994 and 1999. For example, from 1990 to 1999, the nation's minority population grew from 24.3 percent to 28.1 percent of the total population. The University of Nebraska study's use of 1990 data means that it cannot account for many of the important changes during the last decade that influenced crime rates.
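To illustrate the statistical point, consider a minimal sketch with synthetic data. Everything in it is hypothetical: the variable names, the magnitudes, and the simple pooled regression specification are illustrative assumptions, not a reconstruction of either study's actual model. The sketch shows that when a control variable that in reality changes every year is frozen at its 1990 level, any yearly change that happens to be correlated with grant funding is wrongly attributed to the grants:

```python
# Illustrative sketch only: synthetic panel data showing how freezing a
# time-varying control at its 1990 level can bias a grant-effect estimate.
# All names and numbers are hypothetical; this is not either study's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for city in range(300):
    minority_1990 = rng.uniform(5.0, 45.0)   # frozen 1990 census snapshot
    growth = rng.uniform(0.0, 1.5)           # yearly change, differs by city
    for year in range(1994, 2000):
        minority_now = minority_1990 + growth * (year - 1990)
        # Suppose grant dollars flow disproportionately to changing cities...
        grants = 2.0 + 1.0 * growth + rng.normal(0.0, 0.5)
        # ...while crime is driven by *current* conditions; the true grant
        # effect in this synthetic world is exactly zero.
        crime = 50.0 + 0.8 * minority_now + rng.normal(0.0, 3.0)
        rows.append(dict(crime=crime, grants=grants,
                         minority_1990=minority_1990,
                         minority_now=minority_now))
df = pd.DataFrame(rows)

# Specification A: control frozen at its 1990 level (the criticized approach).
frozen = smf.ols("crime ~ grants + minority_1990", data=df).fit()
# Specification B: control measured in the same year as the outcome.
yearly = smf.ols("crime ~ grants + minority_now", data=df).fit()

print(f"grant coefficient, frozen 1990 control: {frozen.params['grants']:.3f}")  # spurious
print(f"grant coefficient, yearly control:      {yearly.params['grants']:.3f}")  # near zero
```

In this synthetic world the true grant effect is exactly zero, yet the frozen-control specification reports a sizable coefficient; measuring the control in the same year as the outcome recovers the truth.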
Perhaps the most surprising aspect of the University of Nebraska analysis is that state and local law enforcement efforts are assumed not to influence crime rates. The statistical model used by the researchers considers only the effect that federal funding has on crime rates. The impact of omitting state and local expenditures can be seen by comparing the size of the COPS program with state and local police expenditures. During the period 1994-1999, the COPS program had a combined budget of $6.9 billion, while during the same period state and local governments devoted over $280 billion to police agencies. For every $1 spent on COPS, over $40 was spent by state and local governments for police protection.
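The arithmetic behind that ratio is a straightforward back-of-the-envelope division of the figures above:

$$
\frac{\$280 \text{ billion (state and local, 1994 to 1999)}}{\$6.9 \text{ billion (COPS, 1994 to 1999)}} \approx 40.6
$$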
An alternative approach can be found in The Heritage Foundation study, whose statistical model accounts for state and local policing. To account for state and local expenditures, The Heritage Foundation used county-level data, which contain more complete information on local spending, as well as important socioeconomic factors that are available on a yearly basis. The Heritage Foundation study found that state and local police expenditures significantly reduce crime. The approach taken in the University of Nebraska study tends to bias the results toward a finding that COPS is more effective than it actually may be.
WHAT CONGRESS SHOULD DO
Advancing the evaluation capability of OJP is important to the promotion of public safety. Congress should take the following steps to improve the evaluation of OJP programs: (1) mandate impact evaluations, (2) require grant recipients to collect data and evaluate their programs, (3) make NIJ an independent agency within OJP, and (4) reserve 10 percent of all OJP grant funding for impact evaluations and have the agency review the research design before approving grants. These steps are explained more fully in the sections below.
Mandate Impact Evaluations
If Congress wants OJP to evaluate the effectiveness of its programs, it will have to mandate it. There is no substitute for Congress making its intentions clear. Congress must specifically direct OJP to measure the effect of its programs on crime.
Too frequently, process evaluations, which answer questions about the operation of a program and service delivery, are substituted for impact evaluations. Process measures that report how much funding was disbursed and how many people were served are not measures of a program's effectiveness in improving the targeted social condition.
A case in point is the Violent Crime Control and Law Enforcement Act of 1994, which required an evaluation of the COPS program. The law suggested that the effectiveness of COPS in reducing crime should be evaluated, but it left the Department of Justice the option of not conducting an impact evaluation. The resulting National Evaluation of the COPS Program failed to determine the program's effectiveness in reducing crime. Instead, the study looked at process measures, such as how many officers were hired. Some of the study's findings were informative. For example, the study concluded that the program's goal of adding 100,000 additional officers would not be met. However, important questions about the program's effectiveness were never even considered.
The National Evaluation of the COPS Program and the University of Nebraska studies illustrate a larger problem with bureaucracies. In general, given the opportunity, bureaucracies will emphasize those aspects of administrative operations that put them in the best light. In cases where they are forced into measuring their effectiveness, bureaucracies will tend to conduct process evaluations or studies designed to produce the most favorable results. From an administrator's perspective, process data are the most readily available type of information about a program, so they receive the closest attention.
To counteract government agencies' natural tendency to avoid impact evaluations, Congress should clearly mandate that OJP evaluate the impact of its programs on crime rates. An example of this type of legislative language mandating an impact evaluation is contained in the Coats Human Services Amendments of 1998. The amendment specifically mandates a randomized impact evaluation of Head Start. Today, the Department of Health and Human Services (HHS) is moving toward determining whether Head Start is an effective program based on rigorous social science methods. Without the congressional mandate, it is very likely that the HHS research would be more process-oriented than focused on the impact of Head Start.
Require Grant Recipients to Evaluate Their Federally Funded Programs
Recipients of OJP grants should be required to demonstrate through scientific means the effect that the programs have had on crime. Anecdotal examples or measures other than actual changes in crime should not be substituted for rigorous impact evaluations that include control variables.
First, before grants are awarded, applicants need to develop a clear plan on how they intend to use the funds to prevent crime. Second, a system to measure and evaluate the effectiveness of the grants must be in place before the awarding of funds. Third, after the funds have been spent, the OJP-funded activities should be evaluated for their effect on crime. Finally, the results of the evaluation should be submitted to OJP for dissemination to Congress and the public.
To summarize these steps: Devise a plan that includes measuring the outcomes of the plan. Implement the plan. Then evaluate the program. Plan. Implement. Evaluate. If grantees cannot take these responsible steps, then they should not receive federal funding.
Make NIJ an Independent Agency Within OJP
NIJ is uniquely situated to be the impact evaluation arm of the Justice Department. To become an independent and truly effective agency, NIJ needs to be directly funded. Currently, NIJ's budget is derived from contributions from its sister bureaus within OJP. NIJ's ability to objectively evaluate OJP programs is seriously jeopardized because NIJ could suffer budget retaliation if the agency's findings are not favorable to its sister agencies. Direct funding of NIJ will help inoculate it against pressure not to evaluate the effectiveness of OJP programs.
Reserve 10 Percent of All OJP Funding for Impact Evaluations
As originally proposed by the University of Maryland report, a minimum of 10 percent of OJP grant funding should be earmarked for impact evaluations. These impact evaluations should be carried out through a mix of in-house NIJ studies and grants to outside academic researchers and independent research firms.
Determining the impact of a program requires a rigorous study design. The net outcomes of a program can be determined when the conditions of the intervention group are compared with those of a similar group that has not received the intervention. Studies based on experimental design, or random assignment, are preferred because their results are less ambiguous.
Because the criminal justice system operates in the context of legal constraints - namely, individual rights and due process - true random experiments are frequently impossible. In these cases, quasi-experimental designs are required, where the intervention and control groups are selected nonrandomly, but some controls are used to minimize threats to the validity of the findings. The inclusion of proper control variables is crucial to the validity of findings of studies that are not based on random assignment.
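To make the distinction concrete, here is a minimal sketch with synthetic data; the single risk factor, the effect size, and the selection rule are all illustrative assumptions rather than a model of any actual evaluation. Under random assignment, a simple difference in means recovers the program's impact; under nonrandom assignment, the naive comparison is confounded until the underlying risk factor is included as a control variable:

```python
# Illustrative sketch only, with synthetic data: why control variables are
# crucial when intervention sites are selected nonrandomly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
risk = rng.normal(0.0, 1.0, n)        # underlying crime-risk factor
true_effect = -1.5                    # the program cuts crime by 1.5 units

# Randomized design: assignment is independent of risk.
treat_r = rng.integers(0, 2, n)
crime_r = 10.0 + 2.0 * risk + true_effect * treat_r + rng.normal(0.0, 1.0, n)
diff_r = crime_r[treat_r == 1].mean() - crime_r[treat_r == 0].mean()

# Quasi-experimental design: higher-risk sites are more likely to get the program.
treat_q = (risk + rng.normal(0.0, 1.0, n) > 0).astype(int)
crime_q = 10.0 + 2.0 * risk + true_effect * treat_q + rng.normal(0.0, 1.0, n)
naive_q = crime_q[treat_q == 1].mean() - crime_q[treat_q == 0].mean()

# Including the risk factor as a control variable recovers the true effect.
X = sm.add_constant(np.column_stack([treat_q, risk]))
adjusted = sm.OLS(crime_q, X).fit().params[1]   # coefficient on treatment

print(f"randomized, difference in means: {diff_r:+.2f}")    # near -1.5
print(f"nonrandom, naive comparison:     {naive_q:+.2f}")   # badly confounded
print(f"nonrandom, with risk control:    {adjusted:+.2f}")  # near -1.5 again
```

In the nonrandom case the naive comparison can even show the program increasing crime, because higher-risk sites both receive the intervention and experience more crime; the regression adjustment removes that confounding.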
CONCLUSION
Impact evaluations offer significant benefits for society because they measure how programs affect the social conditions they are designed to improve. Impact evaluations offer two improvements over the process evaluations that agencies typically produce. First, impact evaluations reduce uncertainty in deciding which programs should be funded. Funding only effective programs will save taxpayer funds by freeing up resources for programs that actually work. If an OJP program is found to be ineffective, then its elimination has only a limited effect on the intended beneficiaries, because the program failed to reduce crime. An ineffective crime-reduction program does not make your constituents safer. In fact, continuing an ineffective program can harm grant recipients because their continued participation wastes time and resources that could be better spent elsewhere.
Second, impact evaluations can improve the quality of public debate about the factors that are responsible for various social problems. Too often, when a city receives federal funding and crime simultaneously declines, it is asserted that the funding caused the decline. Simply observing that crime rates dropped when federal grants flowed to a particular community does not help us understand the reasons why crime rates declined. As the Congressional Budget Office has noted, socioeconomic factors need to be considered in understanding why crime rates change.