Don’t get me wrong…I love data like only geeks can. But even I get cross-eyed at some of the proposals going around now about accountability and benchmarks and comparison tools.
Imagine with me, please, what our enterprise data systems would look like if we were able to gather, store, organize, and retrieve all the data that our many analysts inside and outside the academy have suggested. Never mind the significant state and federal data collection efforts. I do dream of huge multidimensional cubes of data waiting to be mined for nuggets. I salivate like Pavlov’s anticipatory canines at the possibility of predictive modeling using all possible variables.
Yet if we actually had all that data, would we really be able to use it to make sensible recommendations on a reasonable timeline? Alas, probably not.
In the current web-mediated world, information is plentiful. But how much of it can we absorb, utilize, or make sense of?
Collecting the data is only the first step of a full-scale process. Data have to be cleaned, organized, and presented in a format and fashion understandable to the audience. All this is complicated by an overabundance of data, and it requires humans with the training and talent to choose and deploy data to maximum effectiveness.
The result is that we become slower to process our information, slower to make it into usable data, and slower to interpret the streams of data now at our disposal. This is no service to the academy.
Wise decisions rest upon data gold, but if we spend our limited resources gathering every straw, spinning it into actionable form may demand magical rather than procedural solutions.