Usable Innovations
Usable Innovations are operationalized so they are teachable, learnable, doable, assessable, and scalable in practice; they are effective when used as intended; and they include a way to assess the presence and strength of the innovation as it is used in everyday practice.
Innovations have been the focus of implementation efforts in every field of endeavor: human services, social sciences, agriculture, business, computing, engineering, manufacturing, and so on. Standard practices are what we do every day; innovations are something new, a deviation from standard practice. Klein and Sorra (1996) define an innovation as “a technology or practice that an organization is using for the first time, regardless of whether other organizations have previously used the technology or practice.” Nord and Tucker (1987) and Rogers (1995) have offered similar definitions.
Active Implementation is focused on realizing socially significant outcomes where populations benefit from high fidelity use of an innovation. Thus, it is not sufficient to somehow use an innovation once or a few times. To produce benefits at scale, Usable Innovations are operationalized so they are teachable, learnable, doable, assessable, and scalable in practice by generations of practitioners. The lack of adequately defined programs is an impediment to implementation with good outcomes (e.g. Michie and colleagues, 2005; 2009). To begin to address this issue, the following criteria are used to define a Usable Innovation (Fixsen, Blase, Metz, & Van Dyke, 2013):
1. Clear description of the innovation
   a. Clear Philosophy, Values, and Principles: the philosophy, values, and principles that underlie the innovation provide guidance for all treatment decisions, innovation decisions, and evaluations, and are used to promote consistency, integrity, and sustainable effort across all provider organization units.
   b. Clear inclusion and exclusion criteria that define the population for which the innovation is intended: the criteria define who is most likely to benefit when the innovation is used as intended.
2. Clear description of the essential functions that define the innovation: a clear description of the features that must be present to say that an innovation exists in a given location (essential functions sometimes are called core intervention components, active ingredients, or practice elements).
3. Operational definitions of the essential functions of an innovation: practice profiles describe the core activities that allow an innovation to be teachable, learnable, and doable in practice, and promote consistency across practitioners at the level of actual service delivery.
4. A practical assessment of the performance of practitioners who are using the innovation: the performance (fidelity) assessment relates to the innovation philosophy, values, and principles; the essential functions; and the core activities specified in the practice profiles, and is practical enough to be done repeatedly in the context of typical human service systems (see the sketch after this list).
5. Evidence that the innovation is effective when used as intended: the performance (fidelity) assessment is highly correlated (e.g. 0.70 or better) with intended outcomes for children, families, individuals, and society.
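To illustrate what an operationally defined, assessable innovation looks like in practice, the sketch below represents a handful of hypothetical essential functions and a simple 0–2 fidelity checklist in Python. The essential functions, rating scale, and scoring rule are illustrative assumptions only; they are not items from any published practice profile or fidelity instrument.

```python
# A minimal sketch of a fidelity checklist for a hypothetical innovation.
# The essential functions and the 0-2 rating rubric are invented for illustration.
from dataclasses import dataclass


@dataclass
class EssentialFunction:
    name: str          # core intervention component / active ingredient
    description: str   # operational definition from the practice profile


# Hypothetical essential functions (assumptions, not a published instrument)
ESSENTIAL_FUNCTIONS = [
    EssentialFunction("engagement", "Builds rapport as specified in the practice profile"),
    EssentialFunction("assessment", "Completes the structured intake protocol"),
    EssentialFunction("core_activity", "Delivers the core activity as operationally defined"),
    EssentialFunction("feedback", "Reviews progress data with the recipient"),
]


def fidelity_score(ratings: dict[str, int]) -> float:
    """Return the percent of possible points, given 0-2 ratings per essential function."""
    max_points = 2 * len(ESSENTIAL_FUNCTIONS)
    earned = sum(ratings.get(ef.name, 0) for ef in ESSENTIAL_FUNCTIONS)
    return 100 * earned / max_points


# Example: one observation of one practitioner
observation = {"engagement": 2, "assessment": 1, "core_activity": 2, "feedback": 1}
print(f"Fidelity: {fidelity_score(observation):.0f}%")  # -> Fidelity: 75%
```

In practice, items like these would come directly from the practice profiles for the innovation, and ratings would come from direct observation or review of service delivery records.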
Typical definitions of “evidence-based innovations” focus on standards for scientific rigor and statistically significant outcomes (e.g. http://www.colorado.edu/cspv/blueprints). Usable Innovations require evidence that the innovation is effective when used as intended: high fidelity use of the innovation produces good outcomes, and low fidelity use produces poor outcomes.
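The fidelity–outcome relationship described above is an ordinary correlation and can be checked in a few lines of code. The sketch below uses invented fidelity and outcome scores purely for illustration; it is not an analysis of any of the studies cited here, and the 0.70 threshold is simply the example benchmark quoted in the criterion above.

```python
# Minimal sketch: correlating fidelity scores with outcome scores (illustrative data only).
from statistics import correlation  # Python 3.10+

fidelity = [45, 60, 72, 80, 85, 90, 95]   # percent of essential functions delivered as intended
outcomes = [10, 14, 22, 25, 27, 31, 33]   # hypothetical outcome measure per site

r = correlation(fidelity, outcomes)       # Pearson correlation coefficient
print(f"r = {r:.2f}")
print("Meets the ~0.70 criterion" if r >= 0.70 else "Below the ~0.70 criterion")
```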
Dane & Schneider (1998) and Durlak & DuPre (2008) summarized reviews of over 1,200 outcome studies and found that investigators assessed the presence or strength (fidelity) of the independent variable (the intervention) in about 20% of the studies and only about 5% of the studies used those assessments in analyses of the outcome data. That is, about 5% of the innovations being studied would meet the Usable Innovation criterion that requires “Evidence that the innovation is effective when used as intended.” Without information on the presence and strength of the independent variable, it is difficult to know if the innovation was used as intended and it is difficult to know what produced the outcomes in a study (Dobson & Cook, 1980). Based on these reviews, one might expect about 5% of currently named “evidence-based programs” might meet the criteria for defining a program.
The current standards for “evidence” are useful for choosing innovations (better evidence for effectiveness is a good basis for choosing) but are not especially helpful for implementation of an innovation in typical practice settings. Service delivery practitioners and managers do not use standards for scientific rigor in their interactions with recipients and others. They use innovations in practice and need to know what they are (as defined above).
Download: DevelopingUsableInnovations