October 24, 2011
"Measurement and Its Discontents" ... and modern sentencing laws and guidelines
The title of this post starts with the headline of this interesting commentary, which was published in yesterday's New York Times. Though it does not say one word about sentencing, I thought many parts of the piece (especially the passages quoted below) were particularly interesting and deserved consideration as we transition into our review and assessment of guideline sentencing systems:
Why are we still stymied when trying to measure intelligence, schools, welfare and happiness?
The problem is not that we don’t yet have precise enough tools for measuring such things; it’s that there are two wholly different ways of measuring.
In one kind of measuring, we find how big or small a thing is using a scale, beginning point and unit. Something is x feet long, weighs y pounds or takes z seconds. We can call this “ontic” measuring, after the word philosophers apply to existing objects or properties.
But there’s another way of measuring that does not involve placing something alongside a stick or on a scale. This is the kind of measurement that Plato described as “fitting.” This involves less an act than an experience: we sense that things don’t “measure up” to what they could be. This is the kind of measuring that good examples invite. Aristotle, for instance, called the truly moral person a “measure,” because our encounters with such a person show us our shortcomings. We might call this “ontological” measuring, after the word philosophers use to describe how something exists.
The distinction between the two ways of measuring is often overlooked, sometimes with disastrous results. In his book “The Mismeasure of Man,” Stephen Jay Gould recounted the costs, both to society and to human knowledge, of the misguided attempt to measure human intelligence with a single quantity like I.Q. or brain size. Intelligence is fundamentally misapprehended when seen as an isolatable entity rather than a complex ideal. So too is teaching ability when measured solely by student test scores.
Confusing the two ways of measuring seems to be a characteristic of modern life. As the modern world has perfected its ontic measures, our ability to measure ourselves ontologically seems to have diminished. We look away from what we are measuring, and why we are measuring, and fixate on the measuring itself. We are tempted to seek all meaning in ontic measuring — and it’s no surprise that this ultimately leaves us disappointed and frustrated, drowned in carefully calibrated details....
But how are we supposed to measure how wise or prudent we are in choosing the instruments of measurement and interpreting the findings? Modern literature is full of references to the dehumanizing side of measurement, as exemplified by the character Thomas Gradgrind in Dickens’s “Hard Times,” a dry rational character who is “ready to weigh and measure any parcel of human nature, and tell you exactly what it comes to,” yet loses track of his own life.
How can we keep an eye on the difference between ontic and ontological measurement, and prevent the one from interfering with the other?
One way is to ask ourselves what is missing from our measurements.... In our increasingly quantified world, we have to determine precisely where and how our measurements fail to deliver.
October 24, 2011 in Guideline sentencing systems, Science, Who decides
Comments
I thought this was quite an interesting article, partly because I have pondered this exact topic a lot over the years. For me personally, though, it was always in regard to an educational setting. How do we measure intelligence? Book smarts vs. street smarts? How do we factor in Gardner’s nine multiple intelligences? (http://web.cortland.edu/andersmd/learning/MI%20Theory.htm) What are standardized tests really supposed to “measure”? What can artificial intelligence and animal intelligence teach us?
While this may have gone a bit off topic, at the same time I believe that just because someone deems a measurement to be the be-all and end-all does not make it so. Why should one person’s measurement be any better than someone else’s? Can’t a measurement change over time, or even be applied differently on a case-by-case basis given certain circumstances? There are so many varying factors at play in measuring that boxing in certain requirements is almost ludicrous. Some things simply cannot be measured.
In attempting to link this notion to sentencing: just because the U.S. Sentencing Commission has deemed that so many points should be added for such and such an aggravating factor, why should all judges have to follow this “recipe” if they don’t believe it to be just? Don’t you think there was a lot of debate among the Commission members themselves, when they first drafted the Guidelines, about what gets increased and what doesn’t? What to include and what to leave out? Surely there was, and I’m sure it’s still the case today. Nothing has changed. Great minds can and do differ. It’s just that the winners write the history, and that’s what we see in the Guidelines. The members who thought differently, presumably in the minority, had their voices silenced and their rationales stifled, and for what? Because their views did not fit within the prevailing theory of the rest of the Commission? Basically, yes. But that doesn’t mean they were wrong, only that theirs was not the most popular method. And what can you do? The Commission’s purpose was to produce a guideline for the whole country, and even if something was a little off, as long as it was applied uniformly throughout the country, who could say there was a disparity? Well, we can, of course!
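To make that “recipe” concrete, here is a minimal sketch of the kind of point-adding arithmetic being described. Every name, base level, adjustment value, and table entry below is hypothetical and invented purely for illustration; none of it is taken from the actual Guidelines Manual.

```python
# Hypothetical sketch of a point-based "recipe": add adjustments to a base
# offense level, then look up a range in a table. All numbers are invented
# for illustration only; they are NOT the actual U.S. Sentencing Guidelines.

BASE_OFFENSE_LEVEL = 12  # hypothetical starting level for some offense

ADJUSTMENTS = {
    "use_of_weapon": 4,                   # hypothetical aggravating adjustment
    "vulnerable_victim": 2,               # hypothetical aggravating adjustment
    "acceptance_of_responsibility": -3,   # hypothetical mitigating adjustment
}

# Hypothetical slice of a sentencing table:
# (offense level, criminal history category) -> (low months, high months)
SENTENCING_TABLE = {
    (13, "I"): (12, 18),
    (15, "I"): (18, 24),
    (15, "III"): (24, 30),
}

def guideline_range(adjustments_present, criminal_history_category):
    """Add the applicable adjustments to the base level, then look up the range."""
    level = BASE_OFFENSE_LEVEL + sum(
        ADJUSTMENTS[name] for name in adjustments_present
    )
    return SENTENCING_TABLE.get((level, criminal_history_category))

# Example: weapon plus acceptance of responsibility, criminal history category I
# -> level 12 + 4 - 3 = 13 -> a hypothetical range of 12 to 18 months.
print(guideline_range(["use_of_weapon", "acceptance_of_responsibility"], "I"))
```

The point of the sketch is only that, once the point values and table are fixed, the judge’s role in this kind of system reduces to arithmetic and a lookup; the debatable judgments are all made upstream, by whoever chose the numbers.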
Posted by: Isabella | Oct 24, 2011 9:58:50 PM
Haven’t we tried the ‘ontological,’ value-based indeterminate sentencing structure? Didn’t we decide that it doesn’t work because it engenders sentencing disparity, impugns sentencing legitimacy, opens the door for (sub)conscious discrimination in sentencing, etc.? I agree that ontic measurements, in the context of sentencing guidelines, are sterile and fail to fully account for complex human behavior, unique circumstances, etc.
But isn’t that the whole issue? Subjective, indeterminate sentencing opens the door to biases and sentencing disparity; every judge has had a different life experience and is thus predisposed to a unique perspective on sentencing. For example, a white judge who is the father of an 18-year-old college student will sentence a college-going drug user differently than he will a minority college drop-out charged with drug use.
My point is that people, including judges, are prone to abusing power: the more power a person has, the more abuse is likely to follow (power corrupts, and absolute power corrupts absolutely). So, while ontic measurements might not take into consideration all of the factors that unrestrained human judgment can, they prevent other considerations that are more insidious than numerical ‘dehumanization,’ such as racial discrimination, sentencing disparity, lessened sentencing legitimacy, etc.
Furthermore, the article uses IQ tests as an example of how ontic measurements account for only a ‘single quantity,’ without considering other important metrics (the ‘complex ideal’ of human intelligence). That parallel fails in the context of the sentencing guidelines when you consider the substantial number of facets the guidelines take into account (varying degrees of recidivism, involvement in the crime, levels of severity and risk of violence, etc.). So I would say the guidelines provide precisely what ontic measurements like IQ tests are missing: a wide survey of measurements that combines every variable (sterile as those variables may be) bearing on the retributivist and utilitarian efficacy of an offender’s punishment.
Posted by: Will Herbert | Oct 25, 2011 11:52:25 AM
To a large degree, the creation of the federal sentencing guidelines was a rejection of the ontological sentencing structure. The old federal system was disfavored because it resulted in disparity and unfair discrimination. The guidelines were perceived as somehow more accurate due to the wide variety of factors used to create the measurement system. But despite all the factors taken into consideration, the guidelines are still just one ontic measure, one test from the perspective of the Commission, one strange little chart. While the guidelines minimized sentencing disparity and unfair discrimination, they, at least in my opinion, failed to do justice to the individual offender, to provide a punishment that fits the criminal.
So with Booker, we have advisory guidelines, which provide a framework for judicial decisions. Yet judges can sentence as they choose, provided the sentences are reasonable. It seems that on average, according to the most recent Sentencing Commission Report, 54% of judges do in fact follow the guideline recommendations.
Is this not "keep[ing] an eye on the difference between ontic and ontological measurement, and prevent[ing] the one from interfering with the other?" The best of both worlds?
Posted by: Melissa W | Oct 25, 2011 11:40:26 PM
Interesting prison activity: http://online.wsj.com/article/SB10001424052970203911804576653670533388508.html?mod=e2fb
Posted by: Heather Williams | Oct 27, 2011 11:19:21 AM
I find the point this author made very interesting, and it aligns with one of the reasons I am against mandatory minimums and support some forms of discretion. Two defendants who commit the same crime do not necessarily deserve the same sentence, given varying circumstances and underlying facts. Mandatory minimums make this harder to achieve, as they prescribe a minimum sentence for certain crimes while ignoring circumstances in which that minimum is not warranted. In addition, having this floor is likely to increase the sentences given even to those who are not sentenced at the minimum level. Finally, mandatory minimum provisions direct courts to consider certain factors at sentencing and to ignore others. I think the factors that get ignored represent the “ontological” measuring that we have neglected.
Posted by: Carmen Smith | Nov 3, 2011 1:20:56 PM