Tag Archives: assessment

Next speaking engagement: Upstate New York SLA

4 Apr

I really thought I’d have time to write up that reflective post on CiL 2011, but apparently time did not stop while I was in DC. Terribly inconsiderate, and it made for a crazy week.

In any case, I’ve got another opportunity to do the public speaking thing this Friday, April 8, at the UNYSLA’s “Toot Your Own Horn: Measuring & Meeting Your Objectives” event. Here’s the blurb for my bit:

Plural of Anecdote: Assessing the Success of a Digital Repository

Anyone who’s taken a stats class — and plenty of other folks besides — knows the danger of relying on unsupported anecdotal evidence. Yet the data available to us through our myriad assessment tools often proves ineffective or disconnected without the context provided by a strong narrative. This session will discuss how the Web & Digital Projects Group at Cornell University’s Catherwood Library seeks to find a balance, using stories and data analysis not only to assess the success of DigitalCommons@ILR and its other projects, but also to define what success means for those projects.

Honestly, you should just click through and read the description of the whole event: listening to Jill Hurst-Wahl speak is always worth it, and while I’m not as familiar with Sean Branagan’s work, it sounds like he should bring a lot to the table as well.

So if you can make the trip, I’m betting it’ll be worth your while to do so. Hope to see you there!

Axioms of assessment

20 Apr

As part of my presentation at CiL2010, I posited that the following relationships often govern how the assessment of patron attitudes is interpreted by libraries. I thought it’d be fun to share it here as well, if only because I’m inordinately proud of locating a font which makes the “proportional to” symbol look good.
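(The original slide image doesn’t reproduce here; in symbols, my reconstruction of the two relationships reads roughly as:)

```latex
\text{need to listen} \propto \text{patron agreement}
\qquad\qquad
\text{need to educate} \propto \frac{1}{\text{patron agreement}}
```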

This translates as:

“The need to listen to our patrons is directly proportional to how much they agree with what we’re already doing. The need to educate our patrons is inversely proportional to how much they agree with what we’re already doing.”

Sad to say, I’ve been guilty of this myself more than a little. It’s an easy but pernicious attitude to adopt.

A query on the assessment of digital collections

6 Apr

Happy Tuesday, loyal readers! I’m here to pick your brains.

What kinds of resources or case studies have y’all heard about for the assessment of digital collections from a user-focused point of view? Not a “how many hits we get” or a “how friendly is our interface” kind of thing, but a “this collection has few users, but it’s vital to them, while this one has a lot of hits but doesn’t tend to get repeat customers” kind of thing.

This was prompted by a visit from a colleague who posed the question in the context of digital image collections, but as I ponder it and ask around, I’m starting to wonder if it exposes a gap in the assessment of digital resources more broadly.

How have you folks seen this question addressed?