As is widely noted in NBA circles, and as intuition suggests, transition possessions produce more points on average than the typical possession. This season has been no different.
So far this season, transition possessions have yielded an average of 1.10 points per possession, versus 1.03 for non-transition possessions. A gap of 0.07 points per possession is quite significant when extrapolated over the roughly 100 possessions a team uses each game.
Evidently, an increase in transition possessions is immensely beneficial offensively. The argument is not one-sided, however: transition possessions also involve fewer turnovers. On average, non-transition possessions produce 1.86 more turnovers per 100 possessions than transition possessions do.
Contrary to common perception, while turnovers are the bane of any team's offense, they actually help that same team's defense on the other end.
In the modern "small-ball" game, in which teams emphasize agility over size, transition is created not from turnovers but from missed field goal attempts. A pervasive trend emerging from "small-ball" is that when a shot goes up, the two or three players vying for the rebound remain near the basket while the others take off toward the other end, disorganizing the opponent's half-court set and thereby creating "transition."
Transition, then, occurs when the defense cannot match up in its own half and is outnumbered. Turnovers, by contrast, are mostly dead-ball turnovers, which allow the defense to get set on the other end and eliminate the possibility of transition.
Due to this, teams score an average of only 1.001 points on possessions following an opposing turnover, a mark about 0.03 points below the average non-transition possession.
Thus, when factoring in average leaguewide frequencies (16% transition frequency, 14.2% turnover frequency on non-transition possessions), transition possessions provide a net gain of only 0.07 points over the average possession.
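The component figures quoted above can be checked with simple arithmetic. The weighting by the 16% and 14.2% frequencies involves assumptions the article does not fully spell out, so this sketch only reproduces the raw deltas; the variable names are my own:

```python
# Back-of-the-envelope check of the per-possession figures quoted in the text.
transition_ppp = 1.10       # points per transition possession
halfcourt_ppp = 1.03        # points per non-transition possession
post_turnover_ppp = 1.001   # points on possessions following an opponent turnover

# Raw offensive edge of a transition possession over a half-court one
raw_edge = transition_ppp - halfcourt_ppp
print(f"raw edge: {raw_edge:.2f} points/possession")      # 0.07

# Defensive benefit: opponents score slightly less after a (mostly dead-ball)
# turnover than on an average half-court trip
defensive_dip = halfcourt_ppp - post_turnover_ppp
print(f"post-turnover dip: {defensive_dip:.3f} points/possession")  # 0.029
```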
To take a holistic view of transition's effect, the vital factors of transition play must first be identified. Functionally, a team's effectiveness in transition comes down to two things: its efficiency (offensive and defensive) and its volume.
To examine offensive efficiency first, consult Figure 1.
A plain scatter plot of these variables against each other shows very little correlation. The standardized slope is 0.089, far too small to place any stock in the relationship.
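The "standardized slope" of a simple regression is, by definition, the Pearson correlation coefficient. A quick sketch with stand-in data (the real team-level values are not reproduced here) verifies the equivalence:

```python
import numpy as np

# Illustrative stand-ins for the two plotted variables (e.g. transition
# frequency and offensive efficiency) across 30 teams; random, not real data.
rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 0.089 * x + rng.normal(size=30)   # weak linear signal, like the article's

# Standardizing the OLS slope recovers the Pearson r:
# slope * (sd_x / sd_y) == corrcoef(x, y)
slope, intercept = np.polyfit(x, y, 1)
standardized = slope * x.std() / y.std()
r = np.corrcoef(x, y)[0, 1]
assert np.isclose(standardized, r)
```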
The lack of a linear relationship between two variables that logically should be related calls for another form of analysis. One alternative is a grouping, or tier, method. Grouping, however, requires a third, evaluative variable.
For this assessment, that variable could take the form of either winning percentage or net rating. To adjust for luck and happenstance (a topic to be evaluated in greater detail in a later installment), net rating was chosen.
Additionally, to again make the variables relative to pace, a ternary plot was used. For those unfamiliar with ternary plotting, a relatively rare method, it plots three variables as sum-to-one percentages of one another. For instance, to express the point (1, 2, 3), 1 would be plotted as 0.17, 2 as 0.33, and 3 as 0.5.
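The worked example above, in code:

```python
# The triple (1, 2, 3) expressed as sum-to-one shares for a ternary plot.
point = (1, 2, 3)
total = sum(point)
shares = [v / total for v in point]
print([round(s, 2) for s in shares])  # [0.17, 0.33, 0.5]
```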
However, this raises the problem of different scales, and therefore different weights, among the measurements. This is most simply and effectively resolved by replacing the raw data values with percentile rankings. That is, if Team X sat at the league-median transition frequency, its data point for transition frequency would be 0.5.
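A minimal sketch of the percentile-ranking step, with hypothetical pace values (the helper name and data are my own, not the article's):

```python
# Convert raw team values to 0-1 percentile ranks so the three ternary
# axes carry equal weight (median team -> 0.5).
def percentile_ranks(values):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    for pct, i in enumerate(order):
        ranks[i] = pct / (n - 1)   # min -> 0.0, max -> 1.0
    return ranks

pace = [96.1, 99.4, 101.2, 98.7, 100.3]   # hypothetical five-team sample
print(percentile_ranks(pace))  # [0.0, 0.5, 1.0, 0.25, 0.75]
```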
When we develop a ternary plot according to these variables and specifications, we receive the following output:
The problem with plotting points as individuals relative to a greater composite is the potential for visually misconstrued data. In the above plot, a team ranking in the 10th percentile in all categories would appear identical to a team ranking in the 100th percentile in all categories. To catch such occurrences, the three variables can be plotted against each other using contour mapping.
Each yellow point sits where a multiple of ten on each axis intersects. Consulting the heat map of points-per-possession percentiles, there is no such "triple" or near-"triple," so this concern can be dismissed.
However, this method cannot account for sequences of different values with identical ratios. To identify those situations, a much simpler tool can be employed: standard deviation. First, determining groups through hierarchical clustering yields the annotated ternary below:
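As a rough sketch of the clustering step: the data here is random stand-in rather than the real league, and the use of Ward linkage cut at eight clusters is my assumption, not a configuration the article states.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row is one team's (pace, transition frequency, points/possession)
# expressed as percentile ranks; random placeholder for 30 teams.
rng = np.random.default_rng(1)
teams = rng.random((30, 3))

# Ward linkage on the percentile triples, cut into at most 8 groups
# (seven clusters plus the individually grouped outlier described below).
tree = linkage(teams, method="ward")
groups = fcluster(tree, t=8, criterion="maxclust")
print(sorted(set(groups)))
```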
We can logically determine seven groups (organized in reverse order of frequency, expressed as a sum-to-one percentage), with an abnormally extreme outlier (group 8) grouped individually. From these groups, the following tables can be deduced:
| Group | Pace SD | FREQ SD | Points/Possession SD |
| --- | --- | --- | --- |

*As % of median:*

| Group | Pace SD | FREQ SD | Points/Possession SD | Mean SD |
| --- | --- | --- | --- | --- |

*As % of max:*

| Group | Pace SD | FREQ SD | Points/Possession SD | Mean SD |
| --- | --- | --- | --- | --- |
Because the data are expressed as percentiles, there is a nonzero minimum standard deviation (dependent on sample size). This floor has been subtracted from each calculated standard deviation before the result is expressed as a percentage of the maximum possible standard deviation.
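A minimal sketch of how this floor-and-ceiling adjustment might be computed, assuming a 30-team league and that percentile ranks within a group are all distinct; the function name and example values are hypothetical:

```python
import statistics as st

N_TEAMS = 30  # ranks are spaced 1/(N_TEAMS - 1) apart

def sd_as_pct_of_max(ranks):
    """Express a group's SD of percentile ranks relative to its possible range."""
    n = len(ranks)
    step = 1 / (N_TEAMS - 1)
    # Floor: the tightest possible packing is n adjacent ranks.
    floor = st.pstdev(i * step for i in range(n))
    # Ceiling: the widest spread splits the ranks between the two extremes.
    ceiling = st.pstdev([0.0] * (n // 2) + [1.0] * (n - n // 2))
    return (st.pstdev(ranks) - floor) / (ceiling - floor)

# e.g. a fairly tight four-team group of FREQ percentiles:
print(round(sd_as_pct_of_max([0.40, 0.45, 0.52, 0.60]), 3))
```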
To evaluate the validity of each grouping, the alternative hypothesis is assumed and each mean deviation is plotted on a curve. The mean of each group's percentiles across the various curves is then taken as a percentage of the curve's expected mean and graded out of ten, where 10 is a perfect grouping and 1 the worst possible. This process will be modeled in a later installment; the results are depicted below in Table 4 and Figure 6.
From Figure 5, both a weighting hierarchy of the groups and their suitability can be derived. Results from groups 3 and 6 can be assessed with high confidence, and trends in groups 2 and 5 with relative certainty. Because of the variance within groups 1, 4, and 7, however, the null hypothesis is assumed for those groups and they are omitted from any concrete analysis. With this foundation laid, the area of optimal results can be elicited in Figure 6.
In a three-variable system containing two bound dependent variables, the independent variable receives no consideration when determining the optimal zone. Thus, in Figure 6, pace, as the independent variable, is largely irrelevant; the optimal zone is determined by efficiency (points/possession) and volume (FREQ).
The optimal zone can be deduced to be the delineated region: at its extremities, teams either produce few points per transition possession (though still above the average eFG%) on high volume, or many points per transition possession on very low volume.
As expected, teams in this zone cluster toward its middle, with a frequency (FREQ) range of 0.275 to 0.5 (expressed as a sum-to-one percentage).
To assess the certainty of the zone of optimality, the groups are distinguished by net rating in Figure 7.
From Figure 7, we can clearly delimit group 3 as the dominant group, lending credence to the relation between the zonal distribution and offensive efficiency.
In the next installment, we'll dig deeper into the theory behind the zonal distribution and the function we can use to model it, identifying from this function other data points omitted from the zone by virtue of the