Thursday, September 9, 2010

Top Ten List for Spotting Toxic Organizations

Maybe it was what I learned by co-teaching a doctoral course on organizational performance earlier this summer. Or maybe the trouble started when large celestial bodies achieved apogee. Whatever the source, I've spent some effort lately trying to understand toxic organizations and the damage they cause themselves and others. Whether they've learned to be bad on purpose or not, I've concluded that their ineptitude shouldn't be the only focus. The wider point is that by blowing themselves off course, they lose the capacity to serve.


The skill set for consultants who practice for more than money should include spotting low-achieving organizations. Once spotted, a cautious consultant needs to determine whether an organization's toxic behaviors can be overcome so that progress toward a greater good can occur. Next, the consultant needs to make a judgment about whether working with a bad organization is worth her or his professional energy.  My bet is that even the most gifted of consultants can't turn around a purposefully toxic organization, and I'm an optimist!  Let me explain.


We are all human and, like most organizations, we all make mistakes from time to time.  Blindly making the same mistakes over and over again, however, is not just silly; it's beyond stupid. Learning is key, and most organizations don't know what they don't know, a very human condition. But let's not confuse that with toxicity.


Losing organizations cling to old, familiar behaviors while actively resisting efforts to turn ignorance around. A learning organization, on the other hand, understands the value of being wrong and celebrates the journey necessary to get better. Bad organizations believe they are better than they are, based on faulty internal perceptions. These misperceptions cause them to stumble, but most bad organizations whistle past this graveyard. Denial almost always leads to consultant abuse.

Several weeks ago, I fired a toxic organization after a steadily devolving relationship spanning five years. This organization (which won't be named here) had been given a sweetheart role in a national initiative that far exceeded its own experience and expertise. The gap between the project's goals and the organization's capacity was to be filled with consultants who, in turn, would serve education providers.  All went reasonably well in the early years of this work as the organization and its first wave of consultants coalesced around the mantra of "building a bicycle while riding it."  Sadly, though, as the work became more complex and trust between the organization and its partner organizations deteriorated, the organization began sliding down the slippery slope named "control."  Those of you familiar with Theory X management will quickly recognize this behavior.

Most organizational consultants naively believe in the existence of surefire cures for bad organizations. I'm no exception. As the story goes, bring them to understand their own shortcomings and possibilities to do better, painful as that process may be, and they'll get better by leaps and bounds, right?  Well, no. Organizations don't get better unless they sincerely want to. Consultants who are in the field solely for the money develop ways of sidestepping repugnant organizational behavior.  However, for those who come at their work for the greater good rather than for the bottom line, there are abundant clues that should guide whether an organization is worth the outlay of professional energy.   Here's Rick Voorhees' Top Ten List for Spotting Toxic Organizations:

1. Does the organization truly understand the expertise that consultants bring?  Organizations that hire consultants realize that there are some things they don't know, but does the organization recognize that its consultants are critical internal customers?  Or are consultants seen only as economic units?

2. What is the organization's track record in dealing with consultants and other partners? Have other consultants told the organization to take a hike?  Why?  Finding out why others won't work with the organization can save valuable time.

3. What is the trust quotient within the organization? Fear causes weird behaviors, including micromanagement. The exemplar organization became increasingly paranoid, bureaucratic, and controlling over time. The pinnacle of dysfunction came for our poster child organization when it blithely announced that its consultants would be required to sign agreements giving the organization approval over the consultants' work with other partner organizations. Since the work isn't exactly top secret, say, like the Manhattan Project, there really was no reason for such a requirement other than the aforementioned paranoia. Organizations that truly understand a networked world also understand that they can't regulate expertise.

4. Does the organization treat its consultants like employees?  There's a distinct role for consultants (and the expertise they bring and work they perform) versus what the organization can require of its own employees. I once was told by a junior staff member that the organization could require servile behavior from its consultants since they were, in fact, the same as employees. This has obvious and dangerous legal implications for the organization, including workers' compensation and health insurance. Of course, the junior staffer now denies ever saying this. I wouldn't admit to such an ignorant, dangerous statement either.


5. Does the organization have a transparent and systematic way of evaluating its own performance?  It seems disingenuous that a national organization operating under the umbrella of data-driven decision making would never consider gathering data about its own performance.  Good for the goose but bad for the gander.


6. Does the organization communicate internally? If not, it can't communicate externally. The exemplar organization attempted to double its dues for constituent institutions shortly after a national meeting at which the leadership of these institutions was present.  The increase wasn't mentioned at the national meeting; instead, the institutions learned about it through email two weeks later. Neither a good communication strategy nor a way to inspire confidence.


7. How does the organization hire consultants?  Is there an open call for expertise or is it done by word-of-mouth only?  Do candidates have to be a club member or are credentials and expertise the criteria?  


8. Does the organization hire its own staff and family members as consultants?  In other words, does the organization condone nepotism and cronyism?  

9. Are ego needs out of whack? Do lurking psychological needs outweigh the importance of the work?  Our toxic organization felt very threatened when given opinions that ran counter to what its controlling nature believed.

10. Finally, does the organization actually consult with its own consultants?  Or is their advice ignored except when it's convenient?  Consultants are a critical internal resource, and organizations that are too hidebound to learn from them probably ought not to have them.

No single flaw by itself is fatal if the organization can identify it and turn it around. When two or more appear in combination, however, a train wreck is in the offing. Bad organizations can't identify the signs of impending doom without expert intervention. But when #10 is in play, their chances of getting better are zilch. Better to walk away than to be like Sisyphus, constantly pushing a boulder up a hill. Sometimes, you see, it's just the wrong hill.

Sunday, May 9, 2010

The Fifteen Week No-Solution

Semester-long classrooms everywhere are missing their students. Is the academic term too long? Are institutions missing big opportunities by being slaves to the traditional calendar? Are we numb to other possibilities to promote learning? To each question my response is “yes!” As my esteemed colleague Kay McClenney says, “…in higher education, for every problem we have a 15-week solution.”

I won’t go deep into the bureaucracy of how states fund their public institutions. Suffice it to say the narrow focus is on standardizing the time required to offer a 3 credit-hour class to ensure that no institution short-circuits the budget process. When the time that students occupy a classroom seat is made equal across the public sector, we can be sure that learning is equal, right? Well, no.

To catch up with the rest of the organized world, the US push is to help more students complete degrees. We ought to use this opportunity to develop new pathways for them to demonstrate what they already know so that they don’t need to sit through classes designed on the presumption that they don’t know much. Somehow, many traditional academics have convinced themselves that leapfrogging classes or sequences of classes can’t be done without great harm. We don’t know whether this argument is based on harm to students or harm to the curriculum, but most often it’s voiced as the former when it’s actually the latter. Accelerating learning can be done, however, and done with rigor, using competency-based approaches. Too often, we’ve let the undergraduate curriculum become a fortress rather than the tool it should be, one that provides multiple entry and exit points designed to maximize student learning.

While traditional time-based measures are endemic across higher education, their impact is most often felt in community college developmental education programs. Seventy percent or more of new community college students are referred to one or more remedial classes. Most of this group begins their postsecondary careers needing three classes in developmental mathematics. It’s quite possible, therefore, under the tyranny of a fifteen-week term, that they won’t see the inside of a “regular” college-level classroom until almost two years after they first touch an application blank. Small wonder the probability of these students graduating or transferring to a 4-year college is in the single digits.

Inertia exacts a price. As Kay and others point out, reorganizing the developmental curriculum to more fully meet a range of learner needs is daunting work. While there are few guideposts, I’m convinced that much of the angst would evaporate if community colleges understood and used competencies as units of learning. I’ve seen a handful of community colleges enjoy great success in decomposing the credit-hour class into manageable competencies. One of the colleges I coach, Bossier Parish Community College, has had outstanding success in offering individualized pathways for students in developmental math based on mastering specific competencies. BPCC has increased student success rates by an amazing 30 percent in the lowest level of developmental math using competency-based approaches instead of the traditional fifteen-week approach.

There is a recent, resurgent interest in competencies as curricular building blocks. The Lumina Foundation on Education and its “Tuning USA” initiative is leading the way in facilitating competency-based models for select disciplines in Indiana, Minnesota and Utah. Several years back I edited a sourcebook with some very wise colleagues entitled Measuring What Matters: Competency-Based Learning Models in Higher Education. It was designed as a toolkit for faculty and administrators to understand the layout of the curriculum, to look for overlaps in competencies among courses, and to identify opportunities to accelerate learning. Jossey-Bass indicates that it’s sold well, but given our continuing devotion to the traditional fifteen-week term since it was published, perhaps not well enough.

Originally posted May 2010

Friday, October 9, 2009

Rick's Planning Rubrics


One advantage of a too-busy professional life is that interesting work comes spiraling my way with increased frequency.   I spent most of last week at Stellenbosch University in South Africa after having spent several days the week before near Kruger National Park looking at the Big 5 (Africa’s lions, leopards, rhinos, elephants, and cape buffalo); pictures available here. Prior to the too-indulgent photo safari in the Timbavati, I also joined my colleagues at the South African Institutional Research Association in Port Elizabeth, where I gave an invited keynote (it’s a Prezi, a non-linear alternative to PowerPoint, but because it’s web-based, you don’t need any software to view it).
At Stellenbosch, two very bright South African higher education pacesetters, Lynda Murray and Pieter Vermeulen, joined me in an evaluation of the university’s institutional research and planning function. These evaluations inevitably end up wider in scope than a simple focus on institutional research, however. It’s hard to keep an evaluation of an institution’s use of information inside a tight box without commenting on the bigger world inside and outside the university. Stellenbosch was no different.
To say that South Africa’s higher education (tertiary) sector has undergone a major transformation since Reconciliation in 1994 could qualify as an all-time understatement. The post-apartheid era has caused institutions to rethink not just diversity but their approaches to the world in the 21st century. Earlier this year the education ministry was split into two parts, with higher education joining training to cover private and public institutions. The scope of this re-constituted ministry includes universities, colleges, and the skills development sectors, which encompass the Sector Education and Training Authorities (SETAs), the National Skills Authority, and the National Skills Fund. This is much more unified, and certainly more ambitious, than the American arrangement for higher education, where fragmented policy is a fact of life.
Why talk about national changes? Stellenbosch is one of the oldest universities on the continent and has been a traditional leader in South African higher education. What Stellenbosch tries, many will emulate. To ask an external panel to review its use of data and information is a tribute to evolving leadership and a willingness to ask hard questions. I’m not going to give you the details of what we found, since it’s up to the University and its able institutional research and planning leader, Dr. Jan Botha, to distribute those. I can say, though, that Stellenbosch has committed to furthering its leadership journey in South African higher education by using its data strategically and rethinking its approach to strategic planning.
Certain truths fall out of any planning situation. I’ve been fortunate to work with some very bright minds in this business, especially Byron and Kay McClenney at the University of Texas, who are constantly pushing institutions to use their own data to create better opportunities for students to succeed. With their help and the scar tissue that any good consultant accumulates, I’ve steadily been adding to my list of critical elements to gauge institutional planning that I sometimes title–with tongue firmly in cheek–“Rick’s Rubrics.” I’ll share these here. If you’re curious about how they played out at Stellenbosch or in South Africa, drop Jan an email.
• If you’re not planning, you’re planning to fail. Many institutions have glossy strategic plans but fail to operationalize them by explaining exactly who is doing what, how large the commitment is (staff and dollar resources), and how they will know whether it all works.
• Planning, unfortunately, oftentimes becomes a defensive activity. Many institutions proliferate unit planning to keep things “about the way they’ve always been,” and there’s always the tendency to create a plan to satisfy an external audience, knowing that it means very little inside the institution’s own walls.
• Perfect data don’t exist. Most institutions won’t cross this threshold. Fear of failure punctuates this stance, as does some general ignorance about what data are on hand and what can be created.
• Thin to Win. Who wants to read a long plan?  Thick plans are fodder for doorstops, usually. On the other hand, plans without a clear commitment to specific activities to bring about goals aren’t worth the cost of the paper. There’s a balance here, and precision wins the day over the ponderous.
• It’s not enough for planning to be participatory; it also has to be decisive. Committees don’t carry out plans, but the wisdom of those who will carry out the plan is fundamental. Another balancing act, but let’s err on the side of making decisions rather than keeping all parties feeling good.
• Select 3 (maybe 4) “main things” that make a real difference. This is Byron’s critical lesson for me and others. Not atypically, I once evaluated an institution (not Stellenbosch) with 39 priorities. When I asked how in the wide world they could handle 39 priorities, the response was that “we meet and talk about them.” I rest my case!
• Don’t expect a home run every time. Definitely an “Americanism” and forgive me the sports analogy, but I’ve also seen institutions grow quite tired of planning simply because the results aren’t visible, say, in six months or even a year. Planning is a journey, not an episode.
• Be flexible and ready to adjust strategies and goals. Most institutions develop a strategic plan and never adjust it to fit emerging realities and new intelligence. An annual review, prior to setting action priorities for the next year, is advisable.
• Show results widely (even if ugly). Dirty news seldom survives at most institutions, unfortunately. A courageous institution uses ugly data to calibrate the changes needed to address new priorities. Audiences sometimes hear from me that dirty data do not make you a bad person! I hope you see both the humor and the imperative.
• Link clearly to resources. A plan without a clear tie to human and dollar resources is not a plan; it’s a public relations piece. Over the last decade I’ve seen accrediting agencies awaken to institutions with gloss without substance. A good thing.
• Most critically: separate the operational from the strategic. Most institutional managers get hung up here by thinking that their day-to-day activities are strategic when, in fact, they are usually operational (and often quite excellent, as I’ve found). I always get back to doing three (or no more than four) things very well. Most institutions would do well to define what they mean by “operational excellence” before they pursue strategic goals that are tipping points for the future.
Rick Voorhees (home in the US after two thrilling and professionally satisfying weeks in South Africa).