Are We Doing Good?

March 22, 2011

A team of researchers at the University of Washington Information School is playing an expansive role supporting a coalition of community technology practitioners across Washington state. Though the group serves a unique function in this coalition, currently as evaluator of a statewide Broadband Technology Opportunities Program (BTOP) grant, its members are very much embedded actors immersed in the coalition's work. Their relationship with Washington state's BTOP grantees predates the BTOP program. Previous evaluation efforts included the state-funded CTOP program and an even earlier project supported by the Gates Foundation. Through these funding cycles, the UW team has played an important role, not as evaluators tacked on to the end of a project, but as integral players in project development. In the project design phase the UW evaluation team supports grantees' articulation of their goals and of appropriate benchmarks to support those goals. Further, the evaluation team helps grantees discover unintended outcomes as their projects move forward. Regular reports through the life of the grant help grantees take corrective action to keep their projects in line with the needs of their patrons. Finally, the evaluation team supports the field of community technology by sharing knowledge of goals, benchmarks, and outcomes among grantees as well as with funders and policy makers.

DD:  Let’s start by introducing yourselves.

MC: OK. This is Mike Crandall. I'm the chair of the Master of Science in Information Management program at the University of Washington Information School, and I have been involved in looking at libraries, how they're interacting with their communities, and at the impact of community technology, from local Washington State work to national as well as some international projects.

SB: I’m Samantha Becker.  I’m the research project manager on the U.S. Impact Study, which is kind of our umbrella name for a number of public access technology projects that Mike and I have been engaged in for the past three years, including the CTOP grant program that was kind of the predecessor to this Washington BTOP program that we’re doing the evaluation on now.  And I used to be a public librarian in Vermont before I moved out here to Seattle.

DD:  Can you give a little bit of background about the kinds of projects that your team is involved in evaluating?

MC: Sure. Sam mentioned we started several years ago with a project in the state of Washington which focused on community technology centers and the impact of those centers on the populations they served and on their communities. That grew into a state-funded effort that was called the Community Technology Opportunity Program. CTOP was the first round of evaluation that we did with a statewide activity. Following that, we ended up with a cooperative agreement with the Institute of Museum and Library Services to look at the impact of community technology in public libraries across the United States. The results were published last March, and are basically focused on the ways that people use technology within public libraries, and how that intersects with policy areas ranging from workforce to health to education, all the usual suspects in that area. More recently, we're also involved in a project with the state museum and library services which is helping to build out a framework for digital inclusion, digital community inclusion, which is really trying to understand the principles behind how to think about evaluating and understanding the intersection of technology in the community with the community itself.

SB: We're also working with the Knight Foundation right now to do an evaluation of their new public library giving initiative. Knight awarded grants to libraries that were designated for public access technology. On a broader level we're also working to develop a public access technology evaluation tool, a web survey tool for public libraries to use to do their own evaluation of their public access technology, and we expect that to be available to public libraries to use sometime in the summer of 2011. So that's kind of the central theme of the work that we're doing; it's all around public access.

MC: We're evaluating a Washington state grant under the Broadband Technology Opportunities Program, which involves a number of community technology centers, libraries, and other community access points for digital information around the state.

DD: Where do you see the digital inclusion needs of Washington State at present?

MC: Washington State is not unique. It's like most other states in the United States. There's a broad range of access in the population. We have a fairly large migrant worker population here because of the agriculture industry in Washington. That population is certainly one that has limited access to the services that many of us take for granted.

There is also an emerging, fairly deep divide in the justice system in terms of access to online services within the legal system. There's been a big effort in Washington State in the last eight or nine years to identify how to approach that and what to do about it. That's one of the reasons that these centers are being included in this grant, to sort of help further solutions in that area.

The CTCs themselves are, as always, strapped for cash, so their resources are limited. The BTOP grant in particular is going to be extremely helpful to them, not only to build additional services but to put in place some capacity to help with the existing services. I think there are some major impacts that the grant will have in support of that access.

SB: For the providers of public [internet] access, having sufficient equipment available to meet the demand of people who need public access is a current challenge, as is providing help and training. One of the big challenges for a lot of providers is that they don't have resources for staffing to provide one-on-one help for people.

In terms of the digital inclusion needs of Washington state residents, the challenges are issues of affordability and access. For the most part, people in the country have that access available, but there are still some areas that don't have broadband access. For some people it's just unaffordable. And then there are other barriers for people around digital literacy and adoption issues, like whether or not it's relevant, or understanding whether it's relevant. For new users there's both digital literacy and also concerns about privacy and security. So there's a need for public education and public support for adoption and digital literacy, and then there's another need around access and affordability.

DD: For the BTOP program you’re evaluating, how similar or different from each other are the organizations that you are looking at?

MC: It's pretty broad. It includes libraries but many, many other kinds of community organizations. A lot of CTCs [Community Technology Centers]. We have several courthouse and justice centers that are participating. Some of those are actually building, for the first time, an access point into the justice system in their services as part of the grant. The range of the CTCs in this realm is from the very small to the fairly large, so it's a pretty broad spectrum. We also have a tribal library involved.

SB: It’s kind of exciting for us to be working on the BTOP grant because it does involve this mix of different kinds of technology centers.  Most of our previous work has either been public library or community technology center, and this one bridges those so it’ll be interesting and a slightly different type of evaluation too.

DD:  What kind of relationship do you have with the groups you’re evaluating?

MC: Well, there are just lots of different kinds of groups, so the relationships are different. With the actual users our primary interaction is through the broad surveys that we do, but also direct interaction through focus groups and interviews that we do with people, so it's a mixed way of gathering information that goes right from the personal to the sort of anonymous statistical data collection. With the agencies that we work with, the ones that are actually delivering the services, we work closely with organizations that are representative of those agencies, and also directly with the agencies, so we kind of have a mixed engagement there as well. The web survey that we're doing is targeted directly at libraries, so we interact with those libraries. The Knight Foundation work that Sam mentioned is an interaction both with a funder and with the recipients of that funding, so we're getting both sides of the picture there as well. And then in larger national efforts we work very closely with organizations like the Chief Officers of State Library Agencies, the Institute of Museum and Library Services, and the Urban Libraries Council, organizations that are umbrella organizations for the populations that are actually providing the services. In Washington State we also work very closely with a loosely federated organization of community technology centers, which was the coalition that put together the BTOP application and got it approved. There's a wide range of interaction there, I guess.

DD:  When did you start working with the team that put the BTOP grant together?

MC: That started probably a year ago, maybe even a little bit more. Again, it was an outgrowth of the work that was done through the state effort, the Community Technology Opportunity Program that was funded by the state, and it was clear that this coalition of technology centers in the state was ready to move on to another opportunity. We basically worked as an organization within the state to bring them together and develop a proposal that included the evaluation part, which was what we had been focused on. Other people were obviously involved in bringing together the technology side, and some of the other pieces that needed to be engaged, including the sustainability. So it was really a cooperative effort, and very much driven from the bottom up, as you might expect in something like this.

DD:  I don’t know if everyone would be thinking about evaluation even as they’re determining how to work together.  Is that-

MC: I think that’s because we had been involved early on as part of this coalition, and the results that we had gotten back were actually instrumental in helping get the original funding for the CTOP program, so there was clear evidence that the evaluation was a useful addition to the package, and people that were involved recognized that.

DD:  And are you going to be working with the BTOP grantees through the duration of the grant?

SB: Yeah, we'll be doing quarterly reporting on the grantees' progress through the two-year grant.

DD:  It sounds like you have some mechanisms for reporting back to the community of grantees that you developed in the prior work you’ve done with the Washington State digital inclusion community.  Who are the specific beneficiaries of the quarterly reports you are doing as part of the BTOP evaluation?

SB: We certainly want to let people know what patrons are accomplishing in those community technology centers, and that's a lot of our orientation towards the evaluation. The report on the CTOP activities was eventually included in the legislative briefing. I'm not sure how that's going to work with this round because it's a federal grant.

MC: I don't know, Dharma, if you're aware, but we actually put together the results of our initial work in this [Washington] state into a book, Digital Inclusion: Measuring the Impact of Information and Community Technology, which has been published and is available publicly now, so that's something that hopefully will let people learn something about this area.

DD:  Do I know about your book?  My questions may belie this, but I’ve read it!  I have a copy. [laughter]

It sounds like there's a very iterative process to your evaluation. You're gathering information both from the grantees and from their users through multiple methods of inquiry, and that knowledge is reported back to the grantees quarterly?

SB: For this BTOP grant we’re relying on the providers to interface with the users and give us that information.  The way that we approach this evaluation is we’re looking for the providers, the grantees, to tell us what they plan to accomplish with their grant funding.  So their grant applications outlined their expectations for the kind of changes that will occur in their patrons because of being able to use this technology, or because of getting training.  We reviewed their grant applications and we extracted from those their specific goals.  We turned those goals into benchmarks or indicators that can measure their progress towards their goals.  So for this grant we’re very much relying on the grantees to identify the way that technology is used and what they want to accomplish with their users.  And then we support them developing some evaluation markers so that we can evaluate how well they’re doing towards meeting their goals.

We found with CTOP that many of the grantees identified certain goals, very specific goals about what they wanted to accomplish, but in the course of the evaluation they discovered that they were accomplishing many other things as well. For example, one of the CTOP grantees was really focused on youth issues but also discovered they were helping with employment preparation and job skills training. That wasn't a specific goal of their grant, but it was a valuable outcome. So the way that we're trying to capture that in this grant is by sharing the goals between the different providers. So if they are accomplishing things they didn't expect, we're capturing that and kind of giving them credit for going beyond the goals that they designated upfront.

DD:  So you’re sharing the goals and benchmarks among the umbrella group of BTOP grantees.   Is there still some privacy between each individual grantee about how they’re doing on their benchmarks?

MC: It's not so much privacy. Each project has its own specific outcomes that they're looking for, and we want to make sure we capture those and report those back to the individual groups. I think what Sam was pointing out is that they may not have thought about some unanticipated outcomes that, in fact, they are producing from their work, and we want to make sure that those are available to them as well, so they can report on those if they happen to be there. So what we do is we look at all of the projects, and gather from each of the projects what they anticipate their results will be, and then roll that into a single survey that's used for all of the projects at once, which they fill out according to their particular anticipation of what they're going to be doing. And more. It's the more that's the interesting part.
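The roll-up Mike describes can be pictured as a simple data-merging step. Below is a minimal, hypothetical sketch in Python of the general idea: each grantee's anticipated outcomes are combined into one shared item list, and reported results are tallied so that accomplishments beyond a grantee's own plan (the "more") stand out. The grantee names, outcome labels, and data structures are invented for illustration and are not the actual CTOP or BTOP survey instruments.

```python
# Illustrative sketch only: merge each grantee's anticipated outcomes into one
# shared survey, then flag reported outcomes that go beyond a grantee's own plan.
from collections import defaultdict

# Outcomes each grantee anticipated in its application (hypothetical examples).
anticipated = {
    "Grantee A": ["job skills training", "youth programs"],
    "Grantee B": ["e-government access", "job skills training"],
}

# The shared survey covers the union of everyone's anticipated outcomes,
# so unanticipated results can surface for any grantee.
shared_survey_items = sorted({item for items in anticipated.values() for item in items})

def summarize(responses):
    """Tally reported outcomes per grantee and flag results beyond its own plan."""
    summary = defaultdict(dict)
    for grantee, reported in responses.items():
        for item in shared_survey_items:
            planned = item in anticipated.get(grantee, [])
            achieved = item in reported
            summary[grantee][item] = {
                "planned": planned,
                "achieved": achieved,
                "beyond_plan": achieved and not planned,
            }
    return dict(summary)

# Hypothetical quarterly responses.
responses = {
    "Grantee A": ["youth programs", "job skills training", "e-government access"],
    "Grantee B": ["e-government access"],
}

print(summarize(responses)["Grantee A"]["e-government access"])
# {'planned': False, 'achieved': True, 'beyond_plan': True}
```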

SB: Yeah.

DD:  I was imagining that one of the challenges of this kind of group evaluation process might be that people might perceive it as risky. Their organization might be outed for not being up to snuff.  There might be some fear around that, I think that’s what was behind that question.

MC: Yeah.

SB: Hmm.  Interesting.

MC: We haven’t seen that in the work that we’ve done so far.  I think they’re actually appreciative of the fact that they’re getting input on how they’re doing, and they use that to help steer their direction.  With a two year grant, there’s actually quite a bit of opportunity for that.

SB: Yeah, yeah that’s an interesting question, Dharma, mostly because I haven’t heard it from any of the groups that we’ve worked with so far in terms of feeling anxious about that.  I think, you know in the CTOP grant evaluation that we had-  how many did we have, eleven or twelve-

MC:  Yeah, it was twelve.

SB: -organizations, and we reported what they did and the progress towards their goals.   They were all so different in terms of what they were doing and the programs that were being supported by the grant that I don’t think that there was a lot of concern about, “Oh, Goodwill added so many users in this past quarter and one of the other smaller organizations didn’t add as many.”  The scale of the grants and the scale of the program and the specific focus of the programs are so different that I don’t think that competition really exists in terms of outcomes for them.

MC: The other thing to think about here is that for many of these organizations they haven't really had the capacity to do this kind of work in the past and this is actually giving them a chance to, for the first time, collect information that demonstrates how successful they've been. So it actually is, for them, an eye-opener in many ways and turns out to be quite valuable. We had a wrap-up meeting at the end of the CTOP grant where we brought all of the grantees together and shared their results, and they were just really excited about it because they were seeing, in aggregate, how much difference they were actually making; and that's the first time they'd actually seen that, so I think it's very powerful in the sense that when you bring this information together it becomes a sort of motivator, and probably a morale booster, too, because many of them are, you know, getting along with very few resources and doing incredible work, but they don't really get recognized for that very often.

DD: Let's talk about the resources that are required for evaluation. You're working from this micro to macro scale in Washington State. Your group has played this role of sort of helping groups add on an evaluation component that they would not otherwise have been able to do. I think that's really interesting because, like you said, a lot of groups just don't have the capacity to take this on. If they can keep the lights on, they're doing an amazing feat.

MC: Right.

DD:  Can you talk about what’s really in the reach of maybe a small library or a small community technology center?  What kind of resources do they need to take on evaluation?

SB: That's a really good question. Many smaller organizations, regardless of what kind of non-profit field they're working in, have a difficult time with evaluation. It requires some expertise in terms of understanding what evaluation is and how to set up an evaluation framework. Also, a lot of organizations don't really even think about evaluation until after they've done some program, and then they're like, "How do we evaluate what happened here?" What we're trying to do is provide some framework and structure at the more macro level to help guide the process of identifying, you know, what are the inputs and the outputs. We help explain and demonstrate the logic model behind these programs. Also at the macro level, we are trying to make all of our work available and accessible to other groups in the field.

On another level, we're trying to help these grantees do their own evaluation by providing some tools for them. When we're designing the evaluation projects that we work on, we're always thinking about what the burden is on the reporters. How can we make sure they can actually gather the data that we're asking for? The evaluation process needs to be relevant to them. They need to be able to use the results in their local community, those kinds of things. That's why on our website for the U.S. Impact Study umbrella we provide a lot of the background material that we've developed. We even have advocacy tools for libraries to use to talk about public access technology.

The web survey project that we're working on is our direct attempt to make evaluation available to libraries of all different sizes. Recognizing that they don't have the staff to conduct surveys of patrons, and they might have questions of privacy, or concerns about interacting with their patrons in certain ways, the survey requires a very low level of effort from the libraries. The vision of the web survey project is that all they have to do is link their patrons to our tool, but after that we are really doing the heavy lifting. We gather the data and turn that into reports that they can use in their community.
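As a rough illustration of the inputs-and-outputs logic model Sam mentioned above, here is a minimal, hypothetical sketch in Python; the field names and the example program are invented for illustration and are not taken from the study's own materials.

```python
# Illustrative sketch only: a program logic model, inputs -> activities -> outputs -> outcomes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """One program's chain from resources invested to changes for patrons."""
    program: str
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # what the program does
    outputs: List[str] = field(default_factory=list)     # direct, countable products
    outcomes: List[str] = field(default_factory=list)    # changes for patrons and community

example = LogicModel(
    program="Public computing lab",
    inputs=["grant funds", "staff time", "ten workstations"],
    activities=["open lab hours", "one-on-one help sessions", "basic skills classes"],
    outputs=["lab visits per quarter", "help sessions delivered", "classes taught"],
    outcomes=["patrons complete online job applications", "patrons report improved digital skills"],
)

# Benchmarks then attach to outputs and outcomes, e.g. "400 lab visits per quarter".
```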

DD:  Is there any kind of magic number in terms of resources, is there x-percent of a digital inclusion budget that should be earmarked for evaluation?

MC: Uh…fifty percent. [all laugh] I don't think there's a magic number, and it really depends upon, you know, again, what you're trying to evaluate, so it could be quite a bit of money and it could be done on a fairly low budget. Actually the money that we're spending for the technology evaluation for the BTOP grant is not a huge amount, but because we had something to work from, you know, we could build on that, rather than having to start from scratch, which obviously takes more time and money. As far as the actual burden on the individual organizations, that's an issue that has to be thought through in terms of how much they can actually do. Since we're doing this from an umbrella perspective, we can provide them tools which they might otherwise have to develop on their own, which would obviously take a long time. So, again, aggregating the evaluation tends to help with the costs and make it a little less burdensome on the individual organizations, both in terms of processing and investment of time and energy. But I don't know if there's a number that you just come up with…

DD: So, it seems you're both prodding communities to take on evaluation themselves and helping to break the evaluation process down in logical ways. But there is probably also a relative accessibility right now for the groups you're able to work with in Washington State within the scale of the BTOP grant. They have direct access to you and other kinds of expertise that they need to be able to accomplish good evaluation.

MC: Yeah, I think that’s a really good point.

DD: Well, is there a magic scale, or a magic scope? It seems you've got a fairly tight ecology that's growing around your evaluation as integrated into Washington State. Is that partly because this is a good scale for the kind of evaluation you're doing, or is that a dumb question?

MC: No, I think that's actually a great question. It's really interesting that, now that we're working with the Knight Foundation, we can actually take this to a community level and see what works at a community level, which we did not have the opportunity to do before. But from the work we've done so far, I think the state level makes a lot of sense in terms of the policy arena, just because that's where a lot of these things are decided in terms of where resources are allocated across the state and how the organizations that are providing the services are treated in the state. So if you do want to make a difference in that area, then you pretty much have to do it at the state level to really show something. I'm sure there are opportunities at the community level that could be leveraged in the same way, but we haven't actually worked in that space to this point.

It’s been really wonderful to be able to work at so many different levels with this, from the local community to the international spectrum, because there’s so much that crosses over, and it really does help to sort of put things in perspective and realize that every community is different but there are some commonalities that we’re trying to sort of understand in order to be able to take advantage of some of these efforts that are going on in this area.

SB: From my perspective, our involvement in this work is, like Mike said, from these very small community technology centers to the international, but also, we’re talking to the individual users.  So we have that perspective of the individual patrons.  I feel really privileged to have this entire range of perspective on the use of public access technology, from the individual users to the librarians and providers that work in those, to the organizational efforts and then to the national policy level and the involvement of foundations in philanthropy and that effort as well.

DD: One thing that comes to my mind as I'm listening to you talk, and also reading about your work, is that it seems like what you are creating around community technology evaluation in Washington state is not unlike the agricultural research model that's been around for quite a long time. The agricultural extension service, where there's a really specialized research community that integrates knowledge from individual farmers and what they're doing on the ground with other kinds of expertise that might be at a university and so on, and it becomes this kind of corpus callosum in a really intentional way. The ag service operates on many different scales, you know, with regional experts, county-by-county experts, up to the U.S.D.A. Do you think that's a fair comparison? Did you consciously model the work that you're doing on any other kinds of policy-oriented research?

SB: That's like a totally brilliant idea, I love it; we didn't think of that, and I love that idea. We model our evaluation more from the public policy side of things, so how public policy happens, and research for policy purposes, which is a more purposeful type of research than academic research. We're asking directly what the outcomes of certain types of action are. We take that public policy perspective very intentionally, so that what we're doing has purpose and use outside of wherever we're working. Whether we're working in libraries, in CTCs, or in communities, we want the product of our work to resonate with a larger community and resonate with policy-makers who are ultimately making funding decisions. At the core of evaluation is the big question: "Is this worth spending money on?" That question is answered through evaluation.

MC: I will say though, Dharma, that even in Washington State, for the Community Technology Opportunity Program the partner that we worked with was the Washington State University Agricultural Extension Agency-

SB: Right.

MC: -and they saw how this fit with their mission very clearly and were very interested and are still quite engaged and interested in pursuing it further, so I think that your analogy is very apt, and actually could be a really interesting partnership across the country.

SB: Mm-hm.

DD:  Well, I think community technology is, like farming, a knowledge-heavy subject.  It’s not just applied policy but also applied technology.  It’s especially knowledge-heavy as applied to, say, a community’s economy.

SB: Mm-hm.

MC: It is, yeah.

DD:  Your evaluation team was written into the CTOP statute by Washington state.

MC: Yeah.

DD:  How did that happen and why did you think it necessary?

MC: Well, we kind of modeled that after several other states, California being one of them, that had used the legislative process to surface the activities that were going on in the community technology center world to a level where there's actually recognition within a statute that this is a legitimate activity which the state should be counting as something that they recognize as a legal entity within the state. That opens up doors for funding which, until you get that status, you don't have. So that was the target of the law. That's why we tried to get that sort of wording put into the law and have that become part of the overall package that went through the legislature. That turned out to be quite important. It's still not done, because you have to pay attention to these things or they go away quickly, but even making that step of making it visible was a big part of what the coalition as a whole was trying to accomplish when starting out on the project.

DD:  As connoisseurs of evaluation, are there any common mistakes that people make that you could steer somebody away from?

SB: Yeah. I would say the biggest mistake is not thinking about evaluation at the program's onset. Evaluation is much more effective if the principals of the project have a clear idea, or some idea, upfront about how they think their project is going to produce some kind of change. From there a program evaluation can flow in a much better, much stronger way than waiting until after you've implemented your programs and then saying, "Oh, what happened?" Then you have to backwards engineer an evaluation framework around what's already happened instead of thinking through ahead of time what you think is going to happen.

MC: Yeah, I agree with that, and I also think planning evaluation at the onset helps to set scope in a way that makes it much more realistic to accomplish what you’re trying to do.

SB: Mm-hm.

MC: Oftentimes people have big ideas but they don’t really think through what they actually need.  Having a framework that you can work from at the onset helps you to focus in on what’s really important at the beginning.  So you actually have what’s important when you come out at the end.

DD: If you were going to hire somebody to do an evaluation for a BTOP project, are there any particular things you would be looking for in terms of skills, abilities, talents, things to avoid?

MC: I think familiarity with the territory is important. You would certainly want to see some evidence that they had worked in this area before and understood the kinds of things that were important within the digital inclusion community and what sorts of things the policy arena cares about in this area. And that could be evidenced by prior work, or reputation, or whatever you want to use to demonstrate that, but that seems like a really critical piece to bring into a selection process. Probably an ability to work with varied constituencies; that you don't just work with one type of organization or one sector, that you have a broad, again, perspective on how these different sectors and different organizations fit with the things that are going on to achieve the outcomes you're looking for. Sam, more than that?

SB: Yeah, I think really having somebody that understands the sector and the kind of work that is going on here is probably the most important thing. It's hard to construct an evaluation framework if you don't really understand what people are trying to accomplish. That's especially true if there hasn't been a lot of work done upfront by the people who are evaluating about their expectations from the program. I'm not trying to sell our services, [laughs] but I do want to say that for the BTOP grantees that are out there right now, the states that got BTOP grants for libraries in the past few months, the web survey tool that we're developing can be at least one part of an evaluation process that libraries can use in conjunction with the BTOP grant. And that's something that we're developing that libraries can choose to use or not, and it will be very accessible to them and not have much cost associated with it, if any.

DD:  That was a totally appropriate plug.  What I had in mind was a conversation that I had with a city official who is a member of NATOA where they’d hired a consultant to come in to do their evaluation, and they spent more time fighting with the evaluator to get the evaluator to understand what their goals were-

MC: Mm.  Yeah.

DD:  -you know, sometimes the wrong help can be worse than no help at all.

MC: Yeah.

DD: Is there anything that you think we haven’t covered that would be good potentially for the community of BTOP grantees to know about evaluation?

MC: I think the big thing to take away is that evaluation is actually helpful. It gives you really good information that you can use to advance your agenda. So if you do it well, then you get results that are quite impressive, and you can take those results and use them as leverage for future efforts. So, that's it. That's a huge bonus of doing evaluation. A lot of people look at it as an unnecessary expense, but it's a huge opportunity to demonstrate that what you're doing really is valuable, and you can take it forward in different ways.

SB: Evaluation serves two purposes. It has both a performance aspect to it and a communications aspect to it. It serves the purpose of informing by collecting real data about how important the services are. It also serves performance because it provides the grantees with information about their users, helping to shape programmatic decisions, for example, finding out what their patrons are accomplishing and some of their barriers. Seeing that across different providers, I think, is really helpful to them in targeting their services and improving their performance.

DD:  Are there any specific resources that you want to give a shout out to?  Anything particularly useful to BTOP grantees?

SB: I would highly, highly recommend that any organization interested in doing evaluation look at the work of Harry Hatry at The Urban Institute; he has one excellent book, Performance Measurement: Getting Results, which very nicely lays out kind of a how and a why and a process for doing evaluation. Also, online resources are available through The Center for What Works, which is an offshoot of the Urban Institute. They have a lot of resources on evaluation there as well as some really nicely produced guidance on setting up your own evaluation process. We love Harry Hatry.

DD:  All right.  Anything else?

MC: Can’t think of anything.  You’ve covered a lot of territory.

DD:  You’ve covered a lot of territory.  Thank you so much for making time.

Tags:

Evaluation, Policy, Social Impact, Systems