Wednesday, 29 September 2010
What is success?

It's always fun to think seriously about success at the start - much more encouraging than thinking in terms of risks and all the things which could go wrong.
So, what could go right, how would we know it had gone right, and which things going right should we focus on?
We've already blogged about one measurable thing which could go right: deposit rates into the institutional repository (or possibly subsequent access rates). This is a very measurable element - in fact, it measures itself.
User satisfaction is another great measure. If we can create a community of happy researchers and academics who use Mendeley and our deposit system without problems, and who feel it benefits them in some way, that's another good success for DURA. To assess whether we manage this, we'll need some combination of user testing, interviews and surveys (which will give us specific information about how researchers feel about our tool and what using it is like), and potentially measures of support requests and usage levels, which give an indirect signal of how well things are going for users but can be affected by other factors too.

We are already thinking hard about user experience, particularly around setting up the deposit system for a user for the first time, which is where we hit the exciting technical challenges around authentication and authorisation. Getting the setup process right will be key: without it, no one will make it through to the truly simple day-to-day operation of deposit, where we hope to have "no UI" because deposit will automagically happen whilst researchers use Mendeley normally.
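To make that "no UI" ambition a little more concrete, here's a minimal sketch of what the repository-facing half of an automatic deposit might look like over SWORD2. This is an illustration under assumptions, not our actual design: the collection URI, credentials and packaging choice are hypothetical placeholders, though the HTTP headers are the standard SWORD2 deposit headers.

```python
# Hypothetical sketch: push one packaged paper to an institutional
# repository over SWORD2. The URI and credentials are placeholders;
# the headers are standard SWORD2 deposit headers.
import requests

COLLECTION_URI = "https://repo.example.ac.uk/sword2/collection/papers"  # placeholder

def deposit(zip_path, filename, username, password, on_behalf_of=None):
    """POST a SimpleZip package to a SWORD2 collection IRI."""
    headers = {
        "Content-Type": "application/zip",
        "Content-Disposition": "attachment; filename=%s" % filename,
        "Packaging": "http://purl.org/net/sword/package/SimpleZip",
        "In-Progress": "true",  # the item may be completed or curated later
    }
    if on_behalf_of:
        # mediated deposit: the service deposits on the researcher's behalf
        headers["On-Behalf-Of"] = on_behalf_of
    with open(zip_path, "rb") as payload:
        response = requests.post(COLLECTION_URI, data=payload,
                                 headers=headers, auth=(username, password))
    response.raise_for_status()  # expect 201 Created plus a deposit receipt
    return response
```

In the real system, of course, the interesting part is everything around a call like this: obtaining and storing the researcher's authorisation, and deciding when a change in a Mendeley library should trigger it.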
Another form of success which we'd love to have is to be playing an active role in a thriving community of people, all contributing to awesome scholarly infrastructure around repositories and access and preservation. Our engagement with SWORD2 and the whole JISC DEPO programme is part of that, and so is our connection to the community of Mendeley users at Cambridge and beyond. This is a bit harder to measure...
So we will probably focus on the first two kinds of success: happy users, and healthy deposit rates. The system design - the user interfaces, but also the overall technology architecture decisions we make - will play a major role in making sure our researchers find the system easy to use and useful; these are things we are working on today. Later on, we'll also need seamless deployment plans and good support systems in place, as well as processes for finding out how satisfied our users are. Deposit rates we'll look at later; there, the pilot deployment, publicity and so forth will be the big areas affecting this kind of success.
Having written all that, it's back to the bits of the project which are not successes - yet. The real challenge this coming term, for me locally, is the package of institutional issues around coordinating diverse parts of the university - plus our partner companies - to come together around the project and our forthcoming pilot deployments (of which more soon). These aren't all technical issues: there are policy questions and communication challenges, translating between teams with very different backgrounds and priorities, and of course the inevitably slow progress of other university activities which DURA may depend on later. But that's the fun of a project like DURA - bringing together lots of different things to deliver something new :)
Wednesday, 15 September 2010
Counting
One of the requirements of the JISCdepo programme is that the repositories engaged in it should have an analytics engine of some sort, so that deposit rates can be observed over the course of the projects.
But what does this mean, particularly for DURA?
Firstly - what are we counting? The most obvious thing to count for a project around deposit would be deposits!
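As a toy illustration of the kind of analytics this implies, here's a sketch of tallying deposits per calendar month from timestamped deposit events; the sample records are invented for the example.

```python
# Toy illustration: tally deposits per calendar month from timestamped
# deposit events, so the rate can be watched over the project's life.
# The sample records are invented.
from collections import Counter
from datetime import date

deposits = [
    (date(2010, 8, 3), "item-101"),
    (date(2010, 8, 21), "item-102"),
    (date(2010, 9, 14), "item-103"),
]

per_month = Counter(d.strftime("%Y-%m") for d, _ in deposits)
for month in sorted(per_month):
    print(month, per_month[month])  # e.g. "2010-08 2", then "2010-09 1"
```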
Irrefutable proof?
One thing we must remember is that even if we count the deposits going into a repository, and observe an increase during a project to do with deposit, that does not prove the project made a difference.
We might reasonably expect deposits to be going up anyway, as awareness of and interest in repositories and openness increases. And there may be other initiatives underway which could alter the deposit rate within the institution - general repository publicity, perhaps, or some particularly large research datasets stored during the period, or an enthusiastic staff member evangelising effectively. So we can observe changes in the deposit rate, but we cannot necessarily draw meaningful conclusions from them about the effectiveness of projects to increase deposit.
So we must treat deposit counts with some caution - they are certainly valuable information, but not necessarily a vindication, by themselves, of a single project to increase deposit rates.
Counting somewhere else
In DURA we are super lucky though, because all the deposits from our project will come from other places, and those other places are engaged with the project and so we can count things there.
If we're depositing from Mendeley direct to the institutional repository, Mendeley will be counting and we can access that data. If we're depositing from Symplectic to the institutional repository, Symplectic's institutional deployment will be counting, and at least for the case of Cambridge where we'll be doing our test and pilot deployments, we can access that data.
Even better, by counting in Mendeley or in Symplectic, we can tell exactly which submissions come from our project rather than from anywhere else, so it's real data which will help us assess the project's success.
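A sketch of what that separation might look like - the field names and source tags here are hypothetical, purely to illustrate the idea of counting at the source:

```python
# Hypothetical sketch: because DURA deposits flow through Mendeley or
# Symplectic, each record can carry a source tag, letting project
# deposits be separated from everything else. All names are invented.
records = [
    {"id": "item-201", "source": "mendeley-dura"},    # via our project code
    {"id": "item-202", "source": "symplectic-dura"},  # via our project code
    {"id": "item-203", "source": "web-form"},         # ordinary deposit
]

PROJECT_SOURCES = {"mendeley-dura", "symplectic-dura"}
project_deposits = [r for r in records if r["source"] in PROJECT_SOURCES]
print(len(project_deposits), "of", len(records), "deposits came via DURA")
```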
What is success anyway?
For the purposes of this discussion, we're only going to consider success in terms of counting things.
We could easily count deposits during the project. We'd end up with some deposit counts from our trial code work, and some deposit counts from our pilot version (coming later).
I think that these counts are useful to the project internally, but less useful to everyone else. They don't really show the meaningful impact of our work because even during the pilot phase, we may still be ironing out bugs and improving the experience. Also, our work aims to make deposit an integrated part of research workflow on an ongoing basis, and people's initial use of our system is more likely to reflect experimentation than an ongoing engagement.
So real project success, in terms of deposit counts, will need to be monitored after the formal project ends. We are considering reviewing the deposit count over the 12 months after the end of the project - capturing a reasonable embedding period and, conveniently, a full academic cycle. We have yet to decide what a good metric for success would be: double the existing annual rate of academic paper deposit? Do chip in with your thoughts in the comments.
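For concreteness, the arithmetic behind that candidate metric - both counts below are invented placeholders, not real figures:

```python
# Sketch of the candidate success metric: compare deposits in the 12
# months after the project against the existing annual baseline.
# Both numbers below are invented placeholders.
baseline_annual_deposits = 400   # hypothetical pre-project annual rate
post_project_deposits = 900      # hypothetical count, 12 months post-project

ratio = post_project_deposits / baseline_annual_deposits
print("Deposit rate is %.2fx the baseline" % ratio)   # 2.25x
print("Doubled the annual rate?", ratio >= 2.0)       # True
```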
(You may spot that we haven't really talked about counting access to papers in the repository. That's a topic for another day, but also a slightly less relevant one for DURA.)
Labels: analytics, evaluation, jiscDEPO, JISCdura, repositories
Tuesday, 14 September 2010
Science Online London 2010
Science Online London was a great event - lots of interesting and lively people from a variety of communities, and some really excellent speakers. Martin Rees, Aleks Krotoski, and Evan Harris stood out for me.
The most relevant bits for DURA were to do with repositories. One important rationale for DURA is that integrating deposit with reference management (a normal researcher task) might increase deposit rates without repository staff needing to chivvy academics along. Science Online attendees were reminded of the breakdown of costs for repositories, where outreach, acquisition and ingest can be up to 55% of the overall cost. Ouch!
If we can help researchers deposit their works without needing to add to their already busy schedules, this should help deposit rates, and potentially reduce repository costs.
Wednesday, 1 September 2010
Repositories and reference managers
It's all go out there in the blogosphere on topics relevant to DURA:
- Les Carr #1 - social sharing of bibliographic info with institutional repositories
- Les Carr #2 - more on Mendeley and repositories
- Tony Hirst - who uses Mendeley in your institution?
- Peter Murray-Rust - kicking the tyres when people talk about "open" in this space
- Duncan Hull - how unique are the papers in Mendeley?