Thursday, September 26, 2013

Elegy for the Shiny



Matt Reed so often gets it right. His essay on the shininess of the new in this blog entry captures well the incentive to innovate at the expense of closure and completion in higher education administration (and perhaps elsewhere, but higher education is my focus).

On the one hand, there is some comfort in knowing that an institution is not alone in difficulty with follow-through.  On the other hand, if we all suffer a similar malady, we should be able to address it with some mindful commitment of resources.  Yet, as noted by several folk with respectable credentials, the incentives tend to run the other way at the administrative level.  We end up feeling like we don’t finish anything.  The launch is all.

The frustration lies in part in the mismatch between the pace of change in higher education organizations (slow) and the contemporary demands for change from the social and political environments in which we operate.  In short, large educational organizations still plan years in advance and commit their resources to longer-term projects.  Why?  So we can offer our students an opportunity to plan their trajectory.

We are also tasked with using the public funds our taxpayers entrust to us as efficiently as possible to promote educational success in our service area.  Oddly enough, we don’t have climbing walls or any of the other things about which some pundits like to hyperventilate.  We also have no federal loans, and therefore a zero default rate.  It’s a rather different picture here than what one might gather from the media coverage.

Legislatures and think tanks have the luxury of demanding turns on a dime; rarely, however, does that demand come with a concomitant increase in the wherewithal to make nimble change occur, and to keep it going thereafter.  If there is new money available, it is frequently for program start-up, not for sustainable operations.  As much as we’d like to implement a promising new program, many a time the decision has been to wait until we can support it beyond the initial investment.  When we do launch something new, changes in the world beyond our doors may make the effort obsolete well before implementation is achieved.

There's nothing quite like the feeling of starting something wonderful, and then having the funding disappear.  It is painfully regrettable when new programs with the potential to improve things for our students die on the vine for lack of continuing investment.

Wednesday, July 17, 2013

Wherein I ponder the topic of student surveys...

We do pretty well here at not over-surveying our students.  I have, however, worked at and heard about other places where survey fatigue is not just a theoretical possibility, but an everyday reality. 

Surveys are a quick and relatively inexpensive way to gather a lot of information about our students, their experiences, their perceptions, and their desires.  They can become too much of a good thing, though, and it's worth thinking about other ways to collect similar data.  At the very least we can examine our surveying schedule from a meta-view and determine the optimal deployment of the organizational survey resources we simply must use.

My pet peeve: duplicate questions.  If I ask you about data that is not likely to change during your tenure with us, do I really need to confirm and reconfirm that datum on every survey?  There are enough things that do change, and that we need to ask about more than once for that reason.
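The mechanics of avoiding the re-asking are usually simple: if a stable fact already lives in the student record, join it to the survey responses by student ID instead of putting the question back on the form.  Here is a minimal sketch in Python with pandas, using hypothetical column names that stand in for whatever your student system actually calls these fields:

```python
import pandas as pd

# Hypothetical extract from the student record system: facts that rarely change.
student_record = pd.DataFrame({
    "student_id": [101, 102],
    "first_term": ["Fall 2012", "Spring 2013"],
    "program":    ["Nursing", "Undecided"],
})

# The survey itself asks only the item that actually needs asking this term.
survey = pd.DataFrame({
    "student_id":   [101, 102],
    "satisfaction": [4, 5],
})

# Join on the ID rather than duplicating the questions on the survey form.
merged = survey.merge(student_record, on="student_id", how="left")
print(merged)
```

The survey stays short, and the stable data come from the system of record rather than from a respondent's memory.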

Most surveys are entirely too long.  Some engage too many topics.

We are encouraging more of a point-of-service approach to surveys, in which we gather only very limited, targeted information at the point at which the student is having the experience in question.  It works better with their time commitments, minimizes errors of memory in self-reports, and gives us very quick feedback if something is amiss in our operations.

What we have not done, historically, is centralize oversight of surveying to maximize its efficiency and exert quality control over the instruments that are utilized around the college.  I would be very interested to hear how other organizations manage their survey processes around these considerations, and any others I may have neglected to address.

Tuesday, June 4, 2013

Golden Eggs / ROI



Caution, public policy rant ahead...

In Aesop’s fable about the goose who laid the golden eggs, a domestic bird has the happy ability to lay one golden egg each day.  Greedy humans, however, decide that there must be a supply of gold inside the bird, and kill it, only to find it is quite ordinary within.   

This lesson, that overreaching for riches may end a perfectly good situation, is a truism in our culture.  However, there may be another lesson that Aesop’s prodigious bird can help us learn.

Consider the opposite extreme, if you will.  What happens when the farmer and his wife, motivated by whatever interesting calculus of profitability or efficiency, decide to reduce their feeding of the goose, cutting costs to maximize the return they get from the golden eggs?

Starving the goose is a current trend in higher education that bears closer examination.  If we starve it beyond its ability to thrive or survive, will it continue to deliver the eggs?  What is the reasonable level of investment in higher education that will result in the desired beneficial outcomes for the individuals involved, and for the larger society?

It will perhaps be difficult to realize a return on an investment that we have not made in the future prospects of our fellow citizens.

Thursday, April 18, 2013

What to do about the undecided?



There has been a fabulous discussion on LinkedIn (which of course now I can't locate) about the proportion of students at our various institutions who are undeclared or undecided about their education goals when they join us.

One dilemma is what to do about their data in our reporting and analysis of educational intent.  In the current higher education environment at the federal and state level, much of the discussion is about success in the form of degree completion.  That framing is predicated on collecting a stated educational goal, or intent, from each student at entry, so that we can then measure whether or not we supported them in achieving it.

Another issue is the integrity of our data.  We wonder whether some students are simply being recorded as undecided when they don’t immediately self-identify with a specific program or major goal.

Our third challenge is with blanks.  As we say around here, blank is not data.  Was the field skipped?  Was the student undecided?  Is it an error in data retrieval?  Even worse are fields with only a space.  They look like blanks but read like data to our system.
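On the nerd side, the space-only fields are mostly a normalization problem: trim the whitespace before deciding what counts as missing.  A minimal sketch in Python with pandas, assuming a hypothetical "ed_goal" field; the real field names and codes will differ in your student system:

```python
import pandas as pd

# Hypothetical records: one real value, one blank, one space-only field, one true null.
records = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "ed_goal":    ["AA Transfer", "", " ", None],
})

# Strip whitespace so a field containing only a space is treated the same as a blank,
# then convert empty strings into a true missing value.
cleaned = records["ed_goal"].str.strip().replace("", pd.NA)

print(records["ed_goal"].isna().sum())  # 1 -- only the true null is caught before cleaning
print(cleaned.isna().sum())             # 3 -- the blank, the space-only field, and the null
```

Once the space-only values are folded in with the true blanks, the remaining question of whether a blank means "skipped" or "undecided" is a policy call, not a programming one.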

Those are the nerd issues.  Then there are the academic ones.  Should we require the students to make a decision/commitment?  Some research indicates this might help retention.   

What if they are genuinely undecided?  Can we leave a space for that exploration in an environment in which our ability to serve them rests on how many we can get to degree or certificate completion?  What is best for the student in the long and short term?

We are still mulling these questions…comments are welcome.

Monday, April 1, 2013

Decision Agility and the Dream of Big Data



Don’t get me wrong…I love data like only geeks can.  But even I get cross-eyed at some of the proposals going around now about accountability and benchmarks and comparison tools.

Imagine with me, please, what our enterprise data systems would look like if we were able to gather, store, organize, and retrieve all the data that our many analysts inside and outside the academy have suggested we collect.  Never mind the already significant state and federal data collection efforts.

I do dream of huge multidimensional cubes of data waiting to be mined for nuggets.  I salivate like Pavlov’s anticipatory canines at the possibility of predictive modeling using all possible variables.   

Yet if we actually had all that data, would we really be able to use it to make sensible recommendations on a reasonable timeline?

Alas, probably not.  In the current web-mediated world, information is plentiful.  How much of it can we absorb, utilize, or make sense out of?  

Collecting the data is only the first step of a full-scale process.  Data have to be cleaned, organized, and presented in a format and fashion understandable to the audience.  All of this is complicated by an overabundance of data, and it requires humans with the training and talent to choose and deploy data to maximum effect.

The result is that we become slower to process our data, slower to turn it into usable information, and slower to interpret the streams now at our disposal.  This is no service to the academy.

Wise decisions rest upon data gold, but if we spend our limited resources on gathering every straw, spinning it into actionable form may suffer from a need for magical rather than procedural solutions.

Wednesday, February 27, 2013

Mindful Assessment - Efficient Evaluation with Limited Resources



EFFICIENT ASSESSMENT BASICS

Acknowledgements

By no means do I have all the answers, and I am drawing on a large pool of generous experts whose work has inspired me throughout my career as an assessment person.  I owe my role models a huge debt of gratitude and hope to pass on what I have learned from them.  There is likely to be more wisdom available than what I have agglomerated here, so please use this material as a springboard to your own further investigations.  Any inaccuracies or misstatements are, of course, mine.

Introduction

All too often we panic and try to assess everything all the time.  It’s really not necessary, and especially in this time of budgetary constraints, we can be much more successful at evaluating our systems and programs if we focus our efforts in a more conscious fashion.

What are the keys here?

1.  First, you may want to be clear about why you are doing assessment.  There may be more than one reason, and these reasons may lead to seemingly contradictory arrangements in how assessment is designed and implemented.
2.  What are your overall outcomes for assessment as a whole?  These macro level outcomes should be as clearly stated as any program or performance level outcomes.
3. Don’t try to make assessment harder than absolutely necessary.  First we want to achieve simplicity, proficiency and efficiency.  Fancy can come later. 

I. The Context of Assessment

The environment you are in matters when deciding which assessments to perform.  For example, if you have only been measuring your success at getting new students to enroll, but your legislature is suddenly far more interested in funding based on graduation rates, you may want to expand your repertoire.

You may also want to evaluate the culture of evidence and the readiness to use data-driven decision making in your organization.  There may be a need to build awareness, identify thought leaders and early adopters, and encourage public recognition of using assessment results to support recommendations.

II. Positioning Your Outcomes

When measuring outcomes, you want to be sure you are capturing the most accurate data possible so you can make good decisions based on it.  One idea you will hear about is validity, which has to do with the accuracy of measurement: are you really measuring what you think you are measuring?  A related idea is reliability, which has to do with the consistency of your measurement: are you measuring the same way every time?
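If you want to put a number on the consistency side, one common statistic for internal-consistency reliability (by no means the only one, and not necessarily the right one for your instrument) is Cronbach's alpha, which compares the variance of the individual items to the variance of the total score.  A minimal sketch in Python, with made-up ratings purely for illustration:

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Internal-consistency reliability: rows are respondents, columns are items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating three related items on a 1-5 scale (made-up numbers).
ratings = [[4, 5, 4],
           [3, 3, 4],
           [5, 5, 5],
           [2, 3, 2],
           [4, 4, 5]]
print(round(cronbach_alpha(ratings), 2))  # values near 1.0 suggest the items hang together
```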

It’s worth the extra meeting time to go over your outcomes and measures carefully to make sure you are gathering the right stuff for your assessments to be meaningful, not only to the local conditions for your students as they learn, but also for the larger context of education that is changing all around us. 

III. Achieving Simplicity

You’ve heard people say “Keep it Simple…” when talking about assessment design.  Not only is this good practice when you are new at something, it also makes your results easier to explain convincingly to outside reviewers.  Achieving simplicity is harder than it looks, oddly.  You want the smallest reasonable set of measures that fully investigates and documents the outcomes you have set for yourselves (also called parsimonious modeling).

IV. Deciding When to Get Fancy

When the terror of being new at assessment gives way to boredom with the <ho hum> same old measures, then it’s time to think about getting more sophisticated at what you are doing to measure success and drive improvement.

V. Where to Find More Information

Here are a few good places to start finding out more about assessment:


There are many more…happy hunting!