My training in measurement and psychotherapy at the University of Minnesota did not prepare me for my encounter with multi-cultural, multi-problem patients in a large downtown metropolitan general hospital. For a time I provided psychological services to the entire hospital, across all departments. Psychoanalytically based and psychodynamic therapies were appropriate for only a very small number of patients. The available psychological measures were often useless and irrelevant, with the exception of measures of intelligence and brain damage. The disconnect between professional training and post-graduate work environments was typical of that time.
It was assumed and believed that we (primarily the psychiatrists, but the other mental health professions as well) knew what we were doing and how well we were succeeding. There was no challenge to the accuracy of these beliefs from within the professions or from the ruling lay population. “If you want to know how I am doing, ask me. I am the authority.” This is an actual quote.
Treatment activity was not constrained or judged by funding agencies, notably the insurance industry, or by consumer representatives.
In the 1960s there was a powerful zeitgeist favoring minorities, women, and others at a relative disadvantage in economic and political power. There was a strong message emanating from the National Institute of Mental Health and local state mental health leaders requiring attention to and reform of mental health services, particularly patient care in local and state hospitals.
Evaluation of programs and systems of care was just being initiated and mandated. The topic of evaluation was the domain of prominent psychiatrists and sociologists. A professional society was established to provide a forum for the leading figures in this new movement. The topic of evaluation also drew its lifeblood from the funding and requirements set forth by federal mandates that insisted on active evaluation of mental health services. A newer set of individuals emerged from among program directors and program innovators, who presented their work in the form of evaluation presentations and publications.
Mental health programs and services were run by professionals (commonly psychiatrists at first, and later other professions as well) who had no formal training in administration, management, financing, or measurement. The intrinsic model was the extension of office psychiatry treating mental pathology, with little or no understanding of community, special populations, or the realities of true rehabilitation and reintegration into daily living within a context of multiple problems and constraints.
Modifications of treatment and treatment theory were a fluid process with few, if any, research-based rationales. For instance, our training and treatment supervision that emphasized the reconstruction of homosexuals into normal heterosexuality suddenly disappeared, without notice. Stereotypes of minorities and women were fully integrated into treatment formulations and practice. These also disappeared without formal notice. Sexual abuse of patients had not yet achieved the attention and remedy of current-day practice. The changes occurred because of social and political pressure (including pressure from within-profession organizations), not research-based decision making.
The fluid, changing overall context and an associated lesser level of constraint (compared to today) permitted creative individuals to develop new programs and innovations, often with far-reaching consequences: Day Treatment, Suicide Prevention, Crisis Intervention, Sexual Assault Services, Community Mental Health Programs, Psycho-social Rehabilitation Programs, and a variety of evaluation methods, including Goal Attainment Scaling.
One overriding characteristic of that period was that, with very rare exceptions and notwithstanding all the diversity of training and background, everyone was doing their best to help their patients. While the concepts and tools were ill-suited to the tasks required of them, there probably were effective treatments mixed with powerful nonspecific and placebo effects. These treatments, delivered in settings that were safe and had an ambience of healing, brought about many positive changes in the lives of patients. Many therapists found Goal Attainment Scaling to be self-evident and a useful tool in treatment. Typically, they did not write articles; they saw patients.
GAS proved to be an interesting and useful method but required adaptation to particular settings. Standardized measures were easier to use, but they included many items that were irrelevant to the particular setting or individuals, including the therapists. Smith and Cardillo demonstrated that by including only those items in standardized measures that were relevant to individual cases, the distinction between standardized and individualized measurement largely disappeared. Smith and Cardillo also stressed that GAS was not a measure of current status but a measure of change, thus linking the method to familiar statistical concepts. I had always thought of GAS as a measure of prognosis – a concept familiar in the clinical process.
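For readers who have never seen the arithmetic behind that statistical link, the sketch below shows the conventional GAS T-score computation: each goal is scored on a five-point attainment scale (-2 to +2, with 0 the expected outcome), weighted, summed, and standardized to a mean of 50, conventionally assuming an average intercorrelation of 0.3 among the goal scales. The goals, weights, and scores in the example are purely illustrative.

```python
from math import sqrt

def gas_t_score(scores, weights, rho=0.3):
    """Conventional Goal Attainment Scaling T-score.

    scores  -- attainment level per goal, coded -2..+2
               (0 = expected outcome; +/-1 and +/-2 = somewhat or
               much more/less than expected)
    weights -- relative importance assigned to each goal
    rho     -- assumed average intercorrelation among goal scales
               (0.3 by convention)
    """
    weighted_sum = sum(w * x for w, x in zip(weights, scores))
    scatter = sqrt((1 - rho) * sum(w * w for w in weights)
                   + rho * sum(weights) ** 2)
    return 50 + 10 * weighted_sum / scatter

# Hypothetical follow-up: three goals, one above expectation (+1),
# one exactly at the expected level (0), one below it (-1).
print(round(gas_t_score([1, 0, -1], [3, 2, 1]), 1))  # 54.4
```

A profile of all-zero scores, every goal landing at its expected level, comes out at exactly 50, which is why the score reads as change relative to prognosis rather than as current status.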
Early criticism often involved modifications of the method that precluded comparison with our reliability and validity findings. Much of our early writing was in defense against challenges that have now largely evaporated. I have about 1,200 references in my EndNote database, but judging from summaries and abstracts, I have found little or no interest in the methodological topics. Instead, nearly all the articles and dissertations describe applications, their results, and judgments of usefulness.
One of my psychologist colleagues privately confided to me that it was particularly galling to him that he had spent so many hours studying and mastering psychological measures only to be confronted with a method that anyone can make up!
One pervasive problem that exists today is how to compare different treatment settings. If the patients, therapists, conditions, and available resources are the same or very similar, then comparisons using standard measures (medication errors, side effects, costs, quality-of-treatment measures) can be made. You would still have the problem of within-setting, patient-specific expected outcomes. As you might expect from me, I see this as simply a matter of developing Follow-up Guides for organizations.
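To make that concrete: a Follow-up Guide is essentially a grid of weighted goal scales, each with five described outcome levels anchored at the expected outcome. The sketch below renders one such scale for an organization rather than a patient; the scale title, weight, and level descriptions are entirely hypothetical, chosen to echo the standard measures mentioned above.

```python
from dataclasses import dataclass

@dataclass
class GoalScale:
    """One column of a Follow-up Guide: a weighted goal with five
    described attainment levels, keyed -2..+2 (0 = expected outcome)."""
    title: str
    weight: int
    levels: dict[int, str]

# Hypothetical organizational scale built on a standard measure
# (medication errors), with outcome levels set per setting.
medication_errors = GoalScale(
    title="Medication error rate",
    weight=3,
    levels={
        -2: "Error rate rises above last year's level",
        -1: "Error rate unchanged from last year",
         0: "Error rate reduced by 10% (expected outcome)",
        +1: "Error rate reduced by 25%",
        +2: "Error rate reduced by 25% with no serious incidents",
    },
)
```

At follow-up, each scale is scored by matching the observed outcome to a level, and the scores feed the same T-score computation shown earlier, so settings with quite different expected outcomes can still be compared on a common metric.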
An interesting straw-man argument, used here and abroad, tried to force a choice between standardized and individualized measures. It was as though I had stated that we should use only individualized measures for everything. Nonsense! Of course, even if you were measuring height you would have to use both types of measures: the established measure of distance and the expected or appropriate height for the individual. In reply, I invoked Kierkegaard’s title “Either/Or” and answered, “Both.” This would send my challengers into a cloud of uncertainty and imply that the confounding complexities of Existentialism might lie before them.
One image comes to mind - after delivering my expert presentation somewhere in Canada, a nun came forward and asked me if there had been any applications to community interventions and to ethnic populations. Since I was the expert, I held forth at some length. Only later did I learn from her that she and her religious order had been using the method for three years in a Native American community. She was considerably more experienced than I on this topic. When I urged her to publish their efforts, she stated simply that it would not be compatible with their religious order for them to call attention to themselves. I was a very chastened and thoughtful passenger on my return flight.
So how many other individuals and groups are simply going ahead and competently and quietly using the method? When I asked a local Jungian psychotherapist if she used GAS, she said, “Of course. It only makes sense.” End of story.
I am still receiving inquiries from Europe, Australia, New Zealand, and even Minneapolis. What I particularly enjoy about Steve Marson’s application, and about other demonstrations that I have been able to examine, is the maintenance of the essential heart of the matter: the full knowledge of the professional specialty, the effort to know and understand their clients/patients, and the search for the best fit between their professional efforts and the needs of their clientele.
There is some effort in carrying out this task. It is easier to simply hand the client a test and set up an appointment for next week. When you are on the receiving end of services delivered in this manner, you know how you feel: “I could have been anybody.” “Do they have any idea who I am?” The extra effort involves establishing a relationship and mutual understanding. The risk is that it also makes you accountable - unlike the routine processing method, which provides insulation against evaluation. In a psychiatric departmental presentation that I made some years ago, a staff psychiatrist asked, “Just who are you evaluating, the patient or us?” I didn’t answer.