
Interview with Jonathan Shedler

04/23/2014 2:22 PM | Scott Money (Administrator)

In recent years, pressures from managed care and the evidence-based movement have led many researchers to demonstrate the efficacy of psychoanalysis and psychodynamic therapy.  Jonathan Shedler has been a torchbearer for such research, arguing against the typical public narrative in which evidence-based approaches (particularly cognitive-behavioral therapies) enjoy empirical support while analytic/dynamic therapies do not.  In his now famous 2010 article “The Efficacy of Psychodynamic Psychotherapy” (PDF), Shedler reviewed meta-analytic research to argue that treatment gains in dynamic therapy are equal to those of other modalities, and that the latter may actually owe their efficacy to their use of dynamic techniques.

In the following interview, Shedler discusses his attempt to bridge research and clinical practice, arguing that being a good scientist requires being a skilled clinician.  He suggests that the apparent rift between research and psychodynamic therapy is not intrinsic to the two disciplines, but rather a byproduct of scientists lacking clinical experience and clinicians underappreciating the challenge of doing sound scientific research.  Regardless of where the reader stands on this enduring debate, Shedler is masterful in highlighting the attitudes and cultures of science and psychoanalysis that have kept the two apart for so long.


Below the jump, the editors of DIVISION/Review interview Jonathan Shedler.

What got you interested in psychoanalytic research? What is your research background (grad school, work, etc.)?

I’ve worn two hats from the beginning.  I graduated from the personality program at the University of Michigan.  I worked with faculty mentors doing research in personality and social psychology.  I also took the clinical curriculum.  At the time, the Michigan clinical program was basically a psychoanalytic institute.  So I lived in two worlds.  When I was with the researchers, I was a researcher.  When I was with the clinicians, I was a clinician.

My research mentors didn’t understand why I was interested in “all this psychoanalytic stuff.”  At various times, I heard it referred to as religion, superstition, and a cult.  I also encountered resistance from some of the psychoanalytic faculty, who regarded me as an interloper.  What I was trying to do just wasn’t “done.”  There is a lot of talk these days about the science-practice schism.  I was living the science-practice schism.

This experience shaped my view of the profession.  I saw how clinicians and researchers did not talk to each other.  I saw first-hand the insularity and arrogance of some of the psychoanalytic clinicians of the time.  I also saw how arid, artificial, and fundamentally unpsychological research in psychology could be, and how irrelevant much of it was to the needs of real clinicians and real patients.

A certain amount of psychological research looks to me like a game of “let’s pretend”:  let’s pretend we are studying something important while ignoring virtually everything that is psychologically meaningful.  For example, too many studies rely on responses to self-report questionnaires.  If you believe that unconscious mental life matters, this kind of research can seem silly and superficial.  I resolved that my own research would always address unconscious mental life.

Living the schism sounds like a conflictual place to exist. How do you find yourself managing both sides, or have you been able to build an effective “bridge” between the two?

It has not always been an easy place.  I’ve been viewed with suspicion in both camps.  There are academic researchers who try to dismiss me as a “psychoanalyst,” which has become a term of derision in certain circles.  Some psychoanalysts want to put me in a compartment, as “the researcher”—in other words, not really “one of us.”  As if conducting research is somehow a disqualification from being a legitimate psychoanalytic clinician or scholar.

It’s okay, because I’m not looking for “easy.”  Easy is overrated.  It’s easy for researchers to shape research questions to fit a certain research method, instead of tackling questions that are psychologically meaningful.  It’s easy for psychoanalytic clinicians to embrace a certain theoretical model and apply it to every patient, whether it fits or not.  Good research and good clinical work are not “easy.”  We might benefit—individually and as a profession—from spending more time in “conflictual places.”

That said, I’ve found a number of bridges.  One is my work on personality patterns and disorders using the Shedler-Westen Assessment Procedure (SWAP).  My collaborator Drew Westen and I designed the SWAP to (among other things) assess the spectrum of psychological processes relevant to psychoanalytic case formulation—for example, characteristic conflicts, defenses, internal and external object relations, self experience, transference propensities, desires, fears, fantasy life, and so on.  It’s an assessment instrument completed by clinicians, not patients, and it forces clinicians to consider a patient through multiple theoretical lenses.  This research has made me a better clinician and deepened my understanding of psychoanalytic theory.  It has helped me understand which theoretical models apply when, where they illuminate and where they obscure, and their practical treatment implications.

Initially, from a quantitative perspective, researching unconscious mental phenomena seems like a difficult process. How do you go about developing such a skill set?

First, you have to be a real clinician.  This means devoting time to clinical practice, treating a broad spectrum of patients, and undergoing personal psychotherapy or psychoanalysis.  Psychoanalytic concepts are just abstractions unless you experience them first hand, in clinical work and in personal therapy.  You can’t really understand the concept of a transference-countertransference enactment, for example, just by reading about it.

Before we try to quantify any clinical concept, we should understand it at a deep level.  In his book Outliers, Malcolm Gladwell described the “10,000 hour rule.”  Basically, in any area of human endeavor—athletics, music, computer programming, creative writing, psychotherapy, anything—it takes 10,000 hours of focused practice to develop mastery.  I believe this.  This is one reason why I’m skeptical of “clinical” research conducted by people who lack clinical practice experience.

The other piece of the puzzle, for me, is psychometrics: how to measure psychological phenomena.  It sounds boring and people often take it for granted, as if it were something trivial or obvious, but it is not.  Psychological phenomena can be extraordinarily difficult to quantify.  I worked with Warren Norman at Michigan and Jack Block at Berkeley, two of the greatest psychometric minds of their day.  They weren’t clinicians but they understood how to quantify complex, nuanced psychological processes.

Sometimes psychoanalytic clinicians who attempt research get disappointing results, not because their hypotheses are wrong, but because they do not appreciate the psychometric challenges they are tackling.  Their measures may not capture the richness and complexity of psychoanalytic concepts, or they may end up with data that contain too much measurement error to be scientifically useful.

This brings us back to the science-practice schism.  We have empirical researchers in the business of quantifying things, but they lack clinical experience and do not necessarily understand what is important to quantify.  From a psychodynamic perspective, they sometimes end up studying trivia.  We have sophisticated clinicians who may have a deep understanding of clinical phenomena, but they don’t know how to translate clinical ideas into researchable concepts.  If people with these different skill sets valued one another, communicated, and worked together, great things might happen.  But generally, they don’t.


How would you recommend that the profession begin to bridge the schism?

That’s above my pay grade.  We’ve been hearing rhetoric about bridging the science-practice schism from the highest levels of APA leadership for as long as I can remember.  The rhetoric waxes and wanes with APA election cycles.  But it never amounts to anything more than rhetoric.

It might be more helpful to discuss some of the things that exacerbate and perpetuate the schism so we can see it more clearly.  From my perspective, the schism is getting worse, not better.  One contributing factor is that we have institutionalized the science-practice schism by bifurcating training in clinical research and clinical practice into PhD and PsyD programs, respectively.

There was a time when clinical psychology PhD programs at research universities included faculty members who identified primarily as clinicians and faculty members who identified as researchers.   They came into contact with each other and developed some appreciation of one another’s perspectives.  Even researchers who had no professional interest in clinical topics developed some understanding of what clinical work is about.  This is no longer so.

The demands at research universities to publish and get grants have become extreme.  Faculty members cannot devote time to clinical practice; it would be professional suicide for an untenured faculty member to treat patients while others accumulate publications and grants.  As clinicians have retired, they have been replaced by “clinical researchers” or “clinical scientists” with little practice experience.  The trend is toward APA-accredited “clinical” PhD programs without clinicians.  Faculty members at these programs may have no idea what good clinical work even looks like.

This is one reason we keep seeing comments in the news media from prominent researchers that are dismissive and denigrating of in-depth psychotherapy.  Not long ago, for example, former APA president Alan Kazdin told Time Magazine that individual psychotherapy is “overrated and outdated” and bemoaned that too few patients receive “evidence-based treatments like cognitive-behavioral therapy.”

I don’t think anyone who knows what good psychodynamic therapy looks like and can accomplish would consider 12 or 16 sessions of manualized CBT to be good psychotherapy.  And scientific research does not show that it is more effective (see my blog, Bamboozled by Bad Science).  I’m not singling out Professor Kazdin, whose research I respect.  His comments reflect prevailing attitudes.  Clinical researchers routinely extol the brief, scripted therapies studied in research laboratories and denigrate psychotherapy as most of us understand and practice it.  Policy makers and the public are getting a steady diet of misinformation about psychotherapy, coming mostly from people who do not actually practice psychotherapy.

As clinicians have disappeared from research universities, real clinical training has moved increasingly to the professional schools.  This bifurcation of the profession has consequences.  In research-oriented clinical PhD programs, theory and research develop in isolation from the crucial data of clinical experience.  In freestanding professional schools, clinical training can become divorced from the scholarly and intellectual traditions of university life and the critical thinking it fosters.  Training in both kinds of institutions suffers, and the science-practice schism grows.

These are structural problems without easy solutions.  Research universities now depend on the grant money researchers generate, and they have no incentive to reward or encourage clinical immersion.  Many free-standing professional schools have become profit centers, and they will continue to operate as they do so long as it is financially profitable.  Both clinical research and clinical training have become industries.

These comments barely scratch the surface.  Readers who want a more thorough discussion may want to read my article, Why the Scientist-Practitioner Schism Won’t Go Away (Shedler, 2006).


Can you describe some specific research findings that influence your work with patients, and offer a clinical vignette that illustrates the finding’s applicability?

A question can sometimes reveal more than an answer, and that may be the case here.  I know this question was drafted by the Research Committee and reflects a sincere desire to support research and demonstrate its relevance.  But the question reflects an assumption about the relation between research and practice that I don’t share.  It implies there is or should be a direct relationship between research findings and clinical intervention: “Because this study showed X, I do this or that in psychotherapy.”  That is not my understanding of the role of research.

Knowledge is about constructing narratives.  A clinical case formulation is a form of narrative.  So is a scientific theory.  A narrative weaves information together in a way that makes sense of it.  It helps us see how the pieces fit and relate to one another.  A sound narrative is internally coherent, is consistent with what we can know and observe, accounts for as much relevant information as possible, and helps us anticipate what is likely to come.  Our narratives should be dynamic, not static, so we are continually reworking and revising them as new information emerges (like Piaget’s concept of assimilation and accommodation).

For me, empirical findings are one of many streams of information that help shape the narratives that reflect our psychological knowledge and understanding.  The narrative transcends the information it incorporates and represents a synthesis of everything we learn from all sources—from our patients, from theory, from teachers and supervisors, from empirical research, from our countertransference responses, from our personal analyses.  Everything is in the mix.

From this perspective, asking how a specific research finding influences my work with patients is the wrong question.  The direction of influence is not one-directional, from research to clinical work.  It is not even quite right to say the influence is reciprocal or dialectical, although that would be better.  I would say, rather, that everything plays a role in shaping the narratives or working models that inform my clinical work.  (See Shedler, 2004, for more on the relationship between psychoanalysis and research.)

I also don’t draw a sharp distinction between thinking clinically and thinking scientifically.  Good science and good psychotherapy require critical thinking.  Psychotherapy is, among other things, a shared, collaborative process of observation, hypothesis generation, hypothesis testing, and hypothesis revision.  When I do research, I am sharpening my clinical skills.  When I treat patients, I am sharpening my research skills.

A psychoanalytic interpretation is really a hypothesis, and we generally present it to the patient as such: as an idea for mutual consideration and reflection.  What happens next provides data that help us revise, refine, or elaborate on the hypothesis, or discard it and formulate a different hypothesis.

I don’t mean to give the impression that psychoanalytic work is just an intellectual process.  Far from it.  We enter the therapy relationship with our whole selves.  We immerse ourselves in the relationship and experience it from the inside.  We experience it emotionally.  Yet we must be able to shift between experiencing and reflecting on what we experience.  A therapist who engages primarily on an emotional level, without stepping back to reflect and understand, is either participating in an unwitting enactment or else offering the patient little more than emotional pablum.  A therapist who approaches the work in an emotionally distant, intellectualized way will miss everything important.  The work requires heart and head, yin and yang.  In classical analytic language, we must be able to move fluidly between experiencing ego and observing ego, and we work to develop this capacity in our patients as well.

What is sometimes difficult to convey to people who are not psychoanalytically trained is that the most important data emerge from the therapy process, not just the manifest content of our patients’ words and actions.  For example, they emerge in the transference-countertransference enactments that we inevitably find ourselves participating in as we engage with our patients.  Our data are not limited to what our patients tell us.  They include what they show us through their interactions with us, and the emotional reactions we notice in ourselves as we engage with them, and what they communicate metaphorically or symbolically through their associations.

I know of exactly one book that explicitly discusses psychotherapy in terms of hypothesis generation and hypothesis testing and emphasizes how the therapy process—what happens in the room between patient and therapist—provides the crucial data.  The book is Beginnings: The Art and Science of Planning Psychotherapy by Mary Jo Peebles.  I recommend it to all my students and supervisees, from first-year graduate students and psychiatry residents to advanced psychoanalytic candidates.

I could, of course, come up with examples of how particular research findings influence my clinical work, but in doing so, I’m afraid we might miss a larger, more important truth.


So will you tell us about a specific research finding that has influenced your work with patients?

Now that I’ve discussed the misconception embedded in your question, yes.  There’s still a certain amount of debate about the concept of borderline personality organization.  The historical debates between Kernberg and Kohut are legendary, and the topic can stir rancor even now.  Some psychoanalysts view the concept of borderline personality as a throwback to a “one-person psychology” that pathologizes the patient for experiences that are co-constructed in the therapy relationship.  On the other end of the spectrum, I occasionally hear very classically-oriented psychoanalysts reject the idea that personality can be organized around splitting (versus intrapsychic conflict), and dismiss the model with the ultimate psychoanalytic put-down: “It’s not psychoanalysis.”  But there are empirical questions here.  There is no use trying to resolve them through philosophical or ideological debate, or by appeal to authority, or through competing clinical case studies selected and crafted to demonstrate whatever a theorist wants them to demonstrate.

The defining hallmarks of borderline personality organization, as Kernberg conceptualized it, include splitting, projective identification, identity diffusion, and affect dysregulation.  The Shedler-Westen Assessment Procedure (SWAP; Shedler & Westen, 2010) assesses all these phenomena.  The SWAP consists of 200 personality-descriptive statements that a clinician scores according to their relevance to a patient, from “not descriptive” (scored 0) to “most descriptive” (scored 7).  The possible combinations and permutations of SWAP items are virtually infinite.  They allow a knowledgeable clinician to provide a comprehensive description of a patient’s personality functioning in a way that captures the person’s psychological complexity and uniqueness.

We used the SWAP to study personality in large national samples of patients.  In a recent study, 1,201 psychologists and psychiatrists described a patient randomly selected from their practices.  We used statistical techniques to identify naturally occurring diagnostic groupings in the patient sample—that is, groupings of patients who share core psychological features in common, that distinguish them from other patients (Westen, Shedler, Bradley, & DeFife, 2012).

In every sample we studied, we found a grouping of patients that unmistakably fit the theoretical description of borderline personality organization.  Some analysts may object to the concept on philosophical or ideological grounds, but burning the map does not destroy the territory.  The phenomenon exists.

And guess what?  The core, defining features of borderline personality organization do include splitting, projective identification, identity diffusion, and affect dysregulation.  These are theoretical terms.  The actual SWAP items are written in plain English, not theoretical jargon.

SWAP items like the following helped capture the phenomenon of splitting:

When upset, has trouble perceiving both positive and negative qualities in the same person at the same time (e.g., may see others in black or white terms, shift suddenly from seeing someone as caring to seeing him/her as malevolent and intentionally hurtful, etc.).

Expresses contradictory feelings or beliefs without being disturbed by the inconsistency; has little need to reconcile or resolve contradictory ideas.


SWAP items like the following helped capture the phenomenon of projective identification (consider the meaning of the items in combination, not singly):

Tends to see own unacceptable feelings or impulses in other people instead of in him/herself.

Manages to elicit in others feelings similar to those s/he is experiencing (e.g., when angry, acts in such a way as to provoke anger in others; when anxious, acts in such a way as to induce anxiety in others).

Tends to draw others into scenarios, or “pull” them into roles, that feel alien or unfamiliar (e.g., being uncharacteristically insensitive or cruel, feeling like the only person in the world who can help, etc.).

Tends to confuse own thoughts, feelings, or personality traits with those of others (e.g., may use the same words to describe him/herself and another person, believe the two share identical thoughts and feelings, etc.)


These are examples of SWAP items that consistently received high scores for patients in the empirically identified borderline grouping.  They received high scores from clinicians of all theoretical orientations.  In other words, it didn’t matter whether or not the clinicians understood or “believed in” the theoretical concepts (or had ever been exposed to them at all).  When asked to describe their actual patients using descriptive statements written in plain English, this is what clinicians of all theoretical orientations observed and reported.  (For more information about SWAP, visit www.SWAPassessment.org.)

So it turns out that the concept of borderline personality organization, as conceptualized by Kernberg and more recently explicated in Nancy McWilliams’s classic book Psychoanalytic Diagnosis, is empirically sound.  Clear treatment strategies follow from this.  Among other things, we need to recognize and interpret splitting where it occurs.  This means helping a patient hold in mind, at the same time, contradictory attitudes and feelings that he or she is accustomed to compartmentalizing.

For example, I am now treating a patient with narcissistic personality dynamics, organized at a borderline level.  He oscillates between feeling superior and feeling inadequate but he doesn’t experience this inner contradiction.  When he feels superior, that is what is real to him, and his co-existing feelings of inferiority seem inaccessible and irrelevant.  Likewise when he feels inadequate.  When he expressed both sets of feelings sequentially during the same session, I had an opportunity to do some work around this.  I said, “Sometimes you feel that you are better and smarter than everyone else.  And sometimes you feel that everyone else is better and smarter than you.”  My comment was aimed at bringing together what my patient had been keeping apart.  I will likely be making comments along these lines throughout the treatment.  I’m also experiencing splitting in the therapy relationship.  At the moment, I’m a good, idealized object.  I have little doubt that soon enough, I’ll be a bad, devalued object, and I will be prepared to interpret that too.

Here is an instance where there is convergence between psychoanalytic theory, my subjective experience of my patient in the therapy relationship, and a specific research finding.



Shedler, J. (2013, October 31). Bamboozled by bad science: the first myth about “evidence-based” therapy.  Retrieved from http://www.psychologytoday.com/blog/psychologically-minded/201310/bamboozled-bad-science

Shedler, J. (2004).  Book review: Joseph Sandler, Anne-Marie Sandler, & Rosemary Davies (Eds.), Clinical and Observational Psychoanalytic Research: Roots of a Controversy.  Journal of the American Psychoanalytic Association, 52(2).

Shedler, J. (2006).  Why the scientist-practitioner schism won’t go away.  The General Psychologist, 41(2), 9-10.

Shedler, J., & Westen, D. (2010).  The Shedler-Westen Assessment Procedure: Making personality diagnosis clinically meaningful.  In J. F. Clarkin, P. Fonagy, & G. O. Gabbard (Eds.), Psychodynamic Psychotherapy for Personality Disorders: A Clinical Handbook.  Washington, DC: American Psychiatric Press.

Westen, D., Shedler, J., Bradley, B., & DeFife, J. (2012).  An empirically derived taxonomy for personality diagnosis: Bridging science and practice in conceptualizing personality.  American Journal of Psychiatry, 169, 273-284.
