Sturgeon's House

The full title of this work is "Weaponeering - Conventional Weapon System Effectiveness" by Morris Driels, who teaches at the USN Postgraduate School, and the cover of the edition I have in hand can be seen below.



The book aims to "describe and quantify the methods commonly used to predict the probability of successfully attacking ground targets using air-launched or ground-launched weapons", including "the various methodologies utilized in operational products used widely in the [US military]." Essentially, this boils down to a series of statistical methods to calculate Pk and Ph for various weapons and engagements.

The author gave the book to my mother, who was a coworker of his at the time, and who is of the opinion that Driels is not as smart as he perceives himself to be. But, hey, it's worth a review for friends.

I will unfortunately be quite busy in the next few days, but I have enough spare time tonight to begin a small review of a chapter. I aim to eventually get a full review of the piece done.

Our dear friends @Collimatrix and @N-L-M requested specifically chapter 15 covering mines, and chapter 16 covering target acquisition.

Chapter 15


The mine section covers both land mines and sea mines, and is split roughly in twain along these lines.

The land mine section begins with roughly a page of technical description of AT vs AP, M-Kill vs K-Kill, and lists common US FAmily of SCatterable Mines (FASCAM) systems. The section includes decent representative diagrams. The chapter then proceeds to discuss the specification and planning of minefields, beginning with the mean effective diameter of a mine. Driels discusses a simplified minefield method based on mine density, and then a detailed method.

The simplified method expresses the effectiveness of the minefield as a density value. Driels derives, for the release of unitary mines from aircraft,

NMines = Fractional coverage in range * fractional coverage in deflection * number of mines released per pass * reliability * number of passes

and for cluster type

NMines = FRange * FDefl * NDispensers * ReliabilityDispenser * NMines per Dispenser * ReliabilitySubmunition * number of passes

and then exploits the evident geometry to express the Area and Frontal densities. Most useful is the table of suggested minefield densities for Area Denial Artillery Munition and Remote Anti-Armor Mine System, giving the Area and Linear densities required to Disrupt, Turn, Fix, and Block an opponent. 
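These delivery products are simple enough to sketch directly in code. The variable names below are mine, not Driels', and the sample numbers are purely illustrative:

```python
def mines_unitary(f_range, f_defl, n_released, reliability, n_passes):
    """Expected emplaced mines for unitary mines released from aircraft."""
    return f_range * f_defl * n_released * reliability * n_passes

def mines_cluster(f_range, f_defl, n_dispensers, rel_dispenser,
                  mines_per_dispenser, rel_submunition, n_passes):
    """Expected emplaced mines for cluster-type dispensers."""
    return (f_range * f_defl * n_dispensers * rel_dispenser
            * mines_per_dispenser * rel_submunition * n_passes)

def area_density(n_mines, width_m, depth_m):
    """Mines per square meter over the minefield footprint."""
    return n_mines / (width_m * depth_m)

def frontal_density(n_mines, width_m):
    """Mines per meter of minefield frontage."""
    return n_mines / width_m

# Illustrative only: 4 dispensers of 6 mines over 2 passes, imperfect coverage
n = mines_cluster(0.9, 0.9, 4, 0.95, 6, 0.9, 2)
```

Note the geometry Driels exploits: the area density divides by the full footprint, while the frontal density divides only by the width the attacker must cross.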

Whereas the simplified method expresses effectiveness as a density, the detailed model views the targets and mines individually, assuming the targets drive directly through the minefield perpendicular to its width and that each detonation produces exactly one casualty and no sympathetic detonations. The model computes the expected number of targets destroyed by the minefield, beginning with the Mean Effective Diameter and the PEncounter based on distance from the mine.

Driels then derives the number of mines which will be encountered rather than avoided, and which will engage the target. I can't be arsed to type the equations in full, so here you go.
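Since I'm not reproducing Driels' equations, here's a toy stand-in of my own construction. It assumes mines are placed independently and uniformly along the frontage, which is cruder than Driels' treatment, but it captures the flavor of the encounter calculation:

```python
def p_encounter_at_least_one(med_m, width_m, n_mines):
    """Probability a target crossing the field encounters at least one mine.

    Toy model (not Driels' exact method): each mine independently and
    uniformly placed along the frontage; the target sweeps a track of
    width equal to the Mean Effective Diameter (MED).
    """
    p_single = min(med_m / width_m, 1.0)  # chance any one mine lies in the track
    return 1.0 - (1.0 - p_single) ** n_mines
```

As expected, the encounter probability climbs with mine count and with MED relative to frontage.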


The section concludes with an example calculation using the detailed mine method. Overall, this shows the strengths and weaknesses of the book fairly well - it is a reasonable derivation of open-source statistical methods for predicting Pk and Ph and the number of sorties required, but US-specific and limited in scope and depth. 

The treatment of sea mines begins by describing the various types and uses of said mines, importantly noting that they have both defensive and offensive uses, and that the threat posed by mines is just as important as any actual sinkings. There are three classifications of sea mines: contact, influence, and controlled.

Shallow water mines are treated trivially, considering them equivalent to land mines with Blast Diameter in the place of MED, and assuming that the mines cannot be avoided.

Deep water mines are approached in a similar manner, with the aim of determining the number of mines needed to achieve the required probability of damage, and planning missions from there. Two features of sea mines must be considered, however - mine actuation by the passing of the target, and mine damage to the target. The probability of actuation is, unfortunately, dependent on both the depth of the mine and the lateral distance to the target, forming a series of stacked bowls as below.

The mean value of PActivation is the statistical expectation of the curve. Because I don't feel like screencapping another equation, the Width of Seaway where an actuation can occur is qualitatively merely the area under the actuation curve calculated for a specific mine and target combo.

The damage function is also of interest - because we require the mine to both actuate and damage the target, this limits our earlier area under the curve to that area integrated to the limits of the damage function. The selection of mine sensitivity plays a very large role in the effectiveness of our mines. A high setting will lead to many more actuations than damages, which can be indicated by the ratio of the actuation area and the damage area from earlier. Setting the actuation distance equal to the damage distance means that every actuation causes damage, but the probability of actuation is only around 42%. The compromise which selects some Areadamage / Areaactuation of around .8 to .93 is generally preferred. This gives us several useful terms -

PA+D = Reliability * Areadamage / Widthminefield. The probability that the first ship to transit the minefield is damaged is referred to as the threat, or

Threat T = 1 - (1 - PA+D)^NMines = 1 - (1 - Reliability * Areadamage / Widthminefield)^NMines, which can obviously be solved for NMines to get the desired number of mines for a given threat level.
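Inverting the threat equation for NMines is a one-liner; a quick sketch (variable names are mine):

```python
import math

def p_actuate_and_damage(reliability, area_damage, width_minefield):
    """P(A+D) for a single mine against a transiting ship."""
    return reliability * area_damage / width_minefield

def threat(p_ad, n_mines):
    """T = 1 - (1 - P(A+D))^N, the chance the first transiter is damaged."""
    return 1.0 - (1.0 - p_ad) ** n_mines

def mines_for_threat(t_desired, p_ad):
    """Smallest N satisfying 1 - (1 - p_ad)^N >= t_desired."""
    return math.ceil(math.log(1.0 - t_desired) / math.log(1.0 - p_ad))
```

For example, with P(A+D) = 0.1 per mine, seven mines are needed to exceed a 50% threat.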

Anti-submarine mines are an interesting subset of deep sea mines, as they turn the problem from two dimensions to three. Driels accounts for this by replacing the mine damage width with the mine damage area, to no one's surprise. Driels claims that the probability of actuation and damage is

PA/D =  Damage Area / (Width * Depth of minefield). Despite my initial confusion, the reliability term safely reappears in the threat definition below.

T = 1 - (1 - (Reliability * Area damage)/(Width * Depth of minefield))^NMines, with a solution for number of mines for given threat level fairly easily taken out as before.

Lastly, there is a summary of topics for each chapter, though unfortunately these are only qualitative descriptions. Including the final derived equations in these summaries would have been a major benefit, but was overlooked. Ah well. They are quite good for review or refreshing the material.

As before, this is a relatively interesting if shallow engagement with the statistical methods to calculate Pk and Ph and the number of sorties required. Going more into detail regarding selecting Threat values or common (unclass) parameters would be interesting, but is lacking. Assuming I don't slack off tomorrow, I should have most or all of the Target Acquisition chapter covered.

  • 4 weeks later...

I finally got around to doing a review of the Acquisition Chapter. Apologies to @N-L-M for the delay, and apologies in advance for the length.


This topic comes at perhaps an opportune time given a previous conversation with @Sturgeon regarding the Survivability/Lethality Onion.


In this model of lethality, a target must be

First, Seen

Once Seen, Acquired

Once Acquired, Hit

Once Hit, Penetrated

Once Penetrated, Destroyed


With survivability obviously working in reverse. This model has proven quite robust and is extremely popular amongst those studying weapon systems and their survivability. Its terms are generally self-explanatory, though it's worth clarifying the first two. Seeing refers to the perception of any given object via some form of sensor. Acquisition, then, refers to processing the perceived object *as a target*, by some form of identification or feature recognition.


That pretentiousness aside, Driels in this chapter will address both topics; the US Joint Munitions Effectiveness Manual (JMEM) considers them in three parts: Target Detection, Target Recognition, and Target Identification under the titular Target Acquisition model. Detection under JMEM’s model correlates most closely with “See” under the Onion, with the remaining two falling under Acquiring the target.


Driels begins by outlining several historical physiological and psychological models for Target Detection, before describing the work by Johnson which is at the core of the US Army’s Acquire model. Terrain, run-in effects, and conversion of range to probability of launch are accounted for, and the factors are combined with the Acquire model to describe the JMEM’s Target Acquisition model.


Unlike the prior chapter, this chapter is much more explicitly focused on Air-to-Surface weaponeering. While the physiological and sensor models will obviously hold true for a variety of detectors, they are used here exclusively to create a model for the range and probability at which an aircraft will detect some ground target. Terrain masking is equally applicable to cases beyond that of Air-to-Surface, but such things as run-in effects and the minimum time taken to bring an aircraft to a required heading are limited in their application.


Published in 1970, the JMEM Air-to-Surface Target Acquisition Manual divides the acquisition process into the aforementioned three steps of detection, recognition, and identification. Their exact definitions have been included here to preserve the granularity.


Blackwell’s research during WW2 began with experiments into the contrast values required to just discriminate circles projected on a screen. Importantly, target size in Blackwell's model is angular in a fashion evident to anyone familiar with MOA.  Beginning with a definition of contrast as

C = |Luminance of Stimulus - Luminance of Background| / Luminance of Background


And of relative contrast as

Cr = actual target contrast / threshold contrast

This definition becomes clear when one examines the situation where Cr = 1, wherein threshold conditions apply and detection probability is 50%. A table of threshold contrast values has been included here, followed by detection probability as a function of the relative contrast.
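The Blackwell chain up to the table lookup is trivial to express in code (names are mine, not Driels'):

```python
def contrast(lum_stimulus, lum_background):
    """Blackwell contrast: |L_stimulus - L_background| / L_background."""
    return abs(lum_stimulus - lum_background) / lum_background

def relative_contrast(actual_contrast, threshold_contrast):
    """Cr = actual / threshold; Cr = 1 corresponds by definition
    to threshold conditions, i.e. 50% detection probability."""
    return actual_contrast / threshold_contrast
```

The threshold contrast itself still has to come from the tabulated values, which is exactly the lookup step that makes the full process so tedious.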

Practically speaking, to predict detection with this model would require calculating the angular size of the target, then calculating the actual contrast of the target, looking up the threshold contrast, calculating the relative value, and finally determining the probability of detection with the graph. Evidently a complex and lengthy process, these limitations motivated the creation of further detection models.

Overington’s model seeks to correlate the target size and target contrast to the point at which the target is just detectable. This begins with the assumption that the target generates a stimulus between two adjacent retinal receptors between which the boundary of the target and background is located, the magnitude of which will obviously depend on the  magnitude of the contrast. Through a complex series of equations that do not bear reiterating, a relationship is drawn between

0.163 * Contrast = K1 * nReceptors + δ

Where K1 is some constant and δ is the minimum stimulus the brain can detect. From this equation, a threshold contrast value can later be obtained. A great deal of care is paid to the number of receptors which will be stimulated - the minimum, even when seeing very small objects (e.g. stars), is cited as nine. To account for this, a value of

nReceptors = 9.9 * [(height + width)^2 + 0.83]^0.5 is derived, where the height and width of the object are in mrads.


Overington then solves K1 and δ experimentally; they depend on the retinal luminance, which is itself dependent on the scene luminance and the pupil diameter. These equations are not directly solved, and the reader will have to content themselves with the relation of

K1 = 15.4 * (Retinal Luminance)^-0.5 + 0.48
δ = 0.00125 * (Retinal Luminance)^-0.5 + 0.0004

with Retinal Luminance = (π/4) * (pupil diameter)^2 * scene luminance

We can obviously now simplify our earlier equation into

Contrast Threshold = (K1 * nReceptors + δ) / 0.163

Which is in good agreement with the Blackwell model from earlier. It would later be discovered, however, that these models under-predicted the threshold contrast. Testing conducted by Johnson in the 1950s, wherein observers viewed the side of an M48 (a tank not known for its small size, as N-L-M will doubtless attest), showed that the threshold value was higher by around a factor of three compared to the Blackwell and Overington models.
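For the curious, the Overington chain can be strung together directly. I'm using the constants exactly as printed, so the absolute magnitudes depend on whatever luminance units Driels assumes; treat the trends in the output as the meaningful part:

```python
import math

def retinal_luminance(pupil_diameter, scene_luminance):
    """(pi/4) * d^2 * scene luminance, per the relation in the book."""
    return math.pi * pupil_diameter ** 2 * scene_luminance / 4.0

def n_receptors(height_mrad, width_mrad):
    """Stimulated receptor count; bottoms out near 9 for point sources."""
    return 9.9 * math.sqrt((height_mrad + width_mrad) ** 2 + 0.83)

def contrast_threshold(height_mrad, width_mrad, pupil_diameter, scene_luminance):
    """Threshold contrast = (K1 * n + delta) / 0.163, constants as printed."""
    b_r = retinal_luminance(pupil_diameter, scene_luminance)
    k1 = 15.4 * b_r ** -0.5 + 0.48
    delta = 0.00125 * b_r ** -0.5 + 0.0004
    return (k1 * n_receptors(height_mrad, width_mrad) + delta) / 0.163
```

As the equations dictate, the threshold falls as scene luminance rises, and the point-source receptor floor of nine is reproduced by the nReceptors formula.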


My only brief complaint with this section is that it would benefit from a lengthier comparison between the predicted values and the empirical values for threshold contrast. The history and physiology is interesting, of course, if somewhat dry if all we are given is a simple “it does not work by this factor”.


Johnson’s Frequency-Domain Experiments grew out of these simple “detection” tests, beginning with the fact that mere detection is not sufficient for many military tasks, and that neither threshold data nor a model existed for the tasks of recognition or identification. Initial experiments showed that the required contrast scaled nonlinearly with range, which led Johnson to model targets in a frequency rather than spatial domain, best explained visually here.

Each pair of black and white (practically, gray and dark gray) lines is a cycle, and the cycles per milliradian is the cycle frequency. The equivalence between a cycle frequency and a target is constructed as follows.

1. A small image of a military target is projected onto a black screen.
2. An observer is rolled into a position where he can just detect that there is an object on the screen.

3. The image is replaced with a rectangle made of very high cycle frequency bars. The frequency is reduced until the observer can just determine the number of bars.

4. The rectangle is replaced by the image, and the observer is wheeled forward until he just recognizes it. Step 3 is repeated.

5. Step 4 then 3 are repeated with the observer having to identify the target.


The procedure's results have been included here.


This is a strikingly robust and useful model, and has proven sound even when applied to a number of passive sensors such as FLIRs, TVs, and image intensifiers. With it, we can predict acquisition ability of a sensor by measuring its ability to resolve contrast modulated bar patterns. In this passage, Driels discusses an extremely fascinating way of looking at sensors, in a method that’s surprisingly easy to follow given his early work in the chapter. The model seems so simple and robust that one questions why the earlier models are even included, as the Acquire Model to soon be discussed uses Johnson’s work rather than the earlier physiological models.

The US Army’s Acquire Model makes use of Johnson’s Frequency-Domain work, while accounting for significantly more factors. The model begins by calculating the critical dimension (sqrt of the presented area) in mrads, and then selecting an intrinsic contrast value based on the illumination, background, detector, and filters. The attenuation due to atmospheric factors is also taken into account, though the JMEM model accounts only for distance and meteorological factors. Using these factors, the apparent contrast at the sensor is calculated, with

Apparent Contrast = Intrinsic Contrast * Sky-to-Ground Luminance Ratio * e^(-3.91 * RangeKm / VIS)

VIS represents the atmospheric visibility, and is defined as the range at which contrast is diminished to 2% of Intrinsic Contrast.

The sensor in question is then analyzed using Johnson’s method described earlier. A common measurement standard is a four-bar pattern which can be of varying frequency - ie, the bars can be very few or very many milliradians wide. For a given frequency, the illumination through the sensor is increased from zero until the bars are just able to be distinguished, and this value of contrast is paired with the frequency to construct a Minimum Resolvable Contrast curve. A particular value of frequency for a given apparent contrast on this curve is a Spatial Frequency, yielding

N cycles resolved = SF * critical dimension / Range
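These two relations are easy to sanity-check in code. Note that e^(-3.91) ≈ 0.02, which is exactly the 2% definition of VIS, and that critical dimension in meters over range in kilometers conveniently yields milliradians:

```python
import math

def apparent_contrast(intrinsic_contrast, sky_to_ground_ratio, range_km, vis_km):
    """Beer's-law style attenuation: C_app = C0 * SGR * exp(-3.91 * R / VIS)."""
    return intrinsic_contrast * sky_to_ground_ratio * math.exp(-3.91 * range_km / vis_km)

def cycles_resolved(spatial_freq_cyc_per_mrad, critical_dimension_m, range_km):
    """N = SF * d_c / R; meters over kilometers gives milliradians directly."""
    return spatial_freq_cyc_per_mrad * critical_dimension_m / range_km
```

At range equal to VIS, the apparent contrast drops to 2% of intrinsic, as the definition requires.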

Acquire then features a probability for some task (either Detection, Recognition, or Identification) as a function of the ratio between the number of cycles resolved and the number required for a 50% chance of that task being accomplished, included here.
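Driels presents this as a curve rather than an equation. The commonly published empirical fit for the Acquire-family target transfer probability function is the form below; I can't swear it is the exact curve in the book, so treat it as an open-literature stand-in:

```python
def p_task(n_resolved, n50):
    """Target transfer probability function as commonly published for
    Acquire-type models: P = (N/N50)^E / (1 + (N/N50)^E),
    with E = 2.7 + 0.7 * (N/N50). May differ slightly from Driels' figure."""
    ratio = n_resolved / n50
    e = 2.7 + 0.7 * ratio
    return ratio ** e / (1.0 + ratio ** e)
```

By construction the curve passes through 50% when the resolved cycles equal N50, and saturates rapidly above it.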


This is a very powerful result, and is again presented quite cleanly and clearly. I appreciate these two passages a great deal more than the earlier parts, especially since they seem most easily applicable to things outside the A2S realm.


Flight profile accounts for the fact that an aircraft does not always approach the target directly down the line of sight, and that several actions must occur for a successful attack even after the target is detected. The aircraft must decide to attack, must roll into and then execute a turn before exiting the turn and operating the weapon system - respectively XD, XRI, XRO, XOP, and RMIN in the diagram here. These will combine with the beginning kinematics and geometry - the turn radius of the aircraft r and nose angle α - to produce the following RRQ equation.

RRQ = (A cos α + r sin α) ± √[(A cos α + r sin α)^2 - (A^2 - B^2)]


This minimum range to maneuver and launch will be included further along in the model, in addition to the maximum range for detection. An omission which may be deliberate is the possibility of reverse-engineering a target approach to maximize the probability of detecting the target in time to launch a weapon.

Searching refers to the process of moving the sensor’s field of view, the solid angle which it can actually “see”, over the entirety of the solid angle the sensor is capable of moving, referred to as the field of regard. Driels places this towards the end of the chapter, but it appears best suited to address earlier. The US Army’s Acquire model expresses the probability of detection as
P = P1 x P2, where P1 is some time-independent probability of detecting the target, and P2 the conditional probability for some amount of time, best explained below.


Terrain has the effect of blocking almost all sensors used by aircraft, with the particular quandary that terrain can vary quite rapidly and unpredictably. (Something anyone attempting to learn land navigation can attest to.) Driels constructs a workable model for the angle at which an aircraft’s sensors are unmasked as follows - for a given “type” of terrain, place an observer at some random point. The observer measures the angle to the highest terrain feature along a given bearing, which is the unmask angle. This is repeated around the entire circle, producing a cumulative probability distribution of the unmask angles, and the process may be repeated for a variety of terrain types to any desired level of granularity.
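The unmask-angle procedure amounts to building an empirical CDF from sampled angles; a minimal sketch (sample handling is my own construction):

```python
def unmask_cdf(unmask_angles_deg):
    """Empirical CDF of measured unmask angles for one terrain type.

    Input: unmask angles (degrees) sampled on bearings around the observer.
    Output: (angle, P(unmask angle <= angle)) pairs, sorted by angle.
    """
    xs = sorted(unmask_angles_deg)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]
```

Running this per terrain type gives exactly the family of distributions the JMEM model tabulates for farmland, desert, rolling hills, and so on.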

The JMEM target acquisition model covers flat farmland, smooth desert, rolling farmland w/ close forests, rolling desert, flat farmland with close forests, gently rolling hills, rough desert, and sharply rolling hills with trees, though it should be evident that any particular terrain type could be easily calculated. The omission of any form of urban terrain is puzzling, however, and the question of which existing terrain to model it with is thought-provoking. Perhaps sharply rolling hills with trees? That this 2004 book does not cover the acquisition of targets in urban terrain is no great discredit, as it has likely been accounted for in more recent versions of the JMEM acquisition model, but it certainly merits further discussion moving forward to a doubtless more urbanized battlefield.


There are now models in hand for the two major limits on range of detection, terrain and visibility, and from this Driels proceeds to construct a conversion between range and the probability of detection and launch.


This equation, and the cumulative probability that one can derive from it, accounts for not only the distance the aircraft must close to before detecting the target, but also the time taken to search the volume available to the sensor suite and the minimum time/distance required to maneuver the aircraft to launch position. The constant K accounts for the skill level of the pilot, Pmax is assumed to be 1, R is the smaller of the unmask range and the visibility detection range established earlier, and RRQ is the minimum range to maneuver and drop. A series of calculations has been performed in the chart here - these seem to be far lower than occurs in reality, potentially due to the choice of parameters.


Driels then details the usage of the JMEM Target Acquisition Model, a screenshot of which is included here. (Note that PL is significantly higher than in his table earlier.) Inputs to the model can be taken from the Joint Air-to-Surface Weaponeering System (JAWS), as well as additional information regarding the target, vision conditions, weapon trajectory, and launcher kinematics - the latter two obviously determining RRQ.


In summary, the chapter examines physiological models of detection by Blackwell and Johnson, addresses their implementation in the US Army’s Acquire model, and then details the Joint Munitions Effectiveness Manual’s use of Acquire and the additional information its model includes. This chapter is interesting and offers a great deal of unexploited potential - the models are all extremely fascinating, and I can easily imagine their direct applicability towards S2S or passive A2A detection. Crucially, however, the acquisition models all appear to completely neglect *active* sensors, possibly for reasons of confidentiality. Still, the fundamentals behind the two-way radar equation aren’t that complex, and could easily be slotted into the existing maximum range of visibility parameter. Beyond that, it is an interesting chapter, and one of the most insightful in the book.

I think I'm done with the book for now. I may do some simulation of infantry fires by plagiarizing Driels' direct fire chapter, but that is a tale for another day.


  • Similar Content

    • By Toxn
      I think there may be an older version of this thread somewhere, but if so it's buried.
      Post short reviews of books you've read recently, along with any particular points of interest that may be pertinent to the other denizens of this forum.
    • By Collimatrix
      At the end of January, 2018 and after many false starts, the Russian military formally announced the limited adoption of the AEK-971 and AEK-973 rifles.  These rifles feature an unusual counterbalanced breech mechanism which is intended to improve handling, especially during full auto fire.  While exotic outside of Russia, these counter-balanced rifles are not at all new.  In fact, the 2018 adoption of the AEK-971 represents the first success of a rifle concept that has been around for a some time.

      Earliest Origins

      Animated diagram of the AK-107/108
      Balanced action recoil systems (BARS) work by accelerating a mass in the opposite direction of the bolt carrier.  The countermass is of similar mass to the bolt carrier and synchronized to move in the opposite direction by a rack and pinion.  This cancels out some, but not all of the impulses associated with self-loading actions.  But more on that later.

      Long before Soviet small arms engineers began experimenting with BARS, a number of production weapons featured synchronized masses moving in opposite directions.  Generally speaking, any stabilization that these actions provided was an incidental benefit.  Rather, these designs were either attempts to get around patents, or very early developments in the history of autoloading weapons when the design best practices had not been standardized yet.  These designs featured a forward-moving gas trap that, of necessity, needed its motion converted into rearward motion by either a lever or rack and pinion.

      The French St. Etienne Machine Gun

      The Danish Bang rifle
      At around the same time, inventors started toying with the idea of using synchronized counter-masses deliberately to cancel out recoil impulses.  The earliest patent for such a design comes from 1908 from obscure firearms designer Ludwig Mertens:

      More information on these early developments is in this article on the matter by Max Popenker.
      Soviet designers began investigating the BARS concept in earnest in the early 1970s.  This is worth noting; these early BARS rifles were actually trialed against the AK-74.

      The AL-7 rifle, a BARS rifle from the early 1970s
      The Soviet military chose the more mechanically orthodox AK-74 as a stopgap measure in order to get a small-caliber, high-velocity rifle to the front lines as quickly as possible.  Of course, the thing about stopgap weapons is that they always end up hanging around longer than intended, and forty four years later Russian troops are still equipped with the AK-74.

      A small number of submachine gun prototypes with a BARS-like system were trialed, but not mass-produced.  The gas operated action of a rifle can be balanced with a fairly small synchronizer rack and pinion, but the blowback action of a submachine gun requires a fairly large and massive synchronizer gear or lever.  This is because in a gas operated rifle a second gas piston can be attached to the countermass, thereby unloading the synchronizer gear.

      There are three BARS designs of note from Russia:


      The AK-107 and AK-108 are BARS rifles in 5.45x39mm and 5.56x45mm respectively.  These rifles are products of the Kalashnikov design bureau and Izmash factory, now Kalashnikov Concern.  Internally they are very similar to an AK, only with the countermass and synchronizer unit situated above the bolt carrier group.


      Close up of synchronizer and dual return spring assemblies

      This is configuration is almost identical to the AL-7 design of the early 1970s.  Like the more conventional AK-100 series, the AK-107/AK-108 were offered for export during the late 1990s and early 2000s, but they failed to attract any customers.  The furniture is very similar to the AK-100 series, and indeed the only obvious external difference is the long tube protruding from the gas block and bridging the gap to the front sight.
      The AK-107 has re-emerged recently as the Saiga 107, a rifle clearly intended for competitive shooting events like 3-gun.


      The rival Kovrov design bureau was only slightly behind the Kalashnikov design bureau in exploring the BARS concept.  Their earliest prototype featuring the system, the SA-006 (also transliterated as CA-006) also dates from the early 1970s.

      Chief designer Sergey Koksharov refined this design into the AEK-971.  The chief refinement of his design over the first-generation balanced action prototypes from the early 1970s is that the countermass sits inside the bolt carrier, rather than being stacked on top of it.  This is a more compact installation of the mechanism, but otherwise accomplishes the same thing.


      Moving parts group of the AEK-971

      The early AEK-971 had a triangular metal buttstock and a Kalashnikov-style safety lever on the right side of the rifle.

      In this guise the rifle competed unsuccessfully with Nikonov's AN-94 design in the Abakan competition.  Considering that a relative handful of AN-94s were ever produced, this was perhaps not a terrible loss for the Kovrov design bureau.

      After the end of the Soviet Union, the AEK-971 design was picked up by the Degtyarev factory, itself a division of the state-owned Rostec.

      The Degtyarev factory would unsuccessfully try to make sales of the weapon for the next twenty four years.  In the meantime, they made some small refinements to the rifle.  The Kalashnikov-style safety lever was deleted and replaced with a thumb safety on the left side of the receiver.

      Later on the Degtyarev factory caught HK fever, and a very HK-esque sliding metal stock was added in addition to a very HK-esque rear sight.  The thumb safety lever was also made ambidextrous.  The handguard was changed a few times.

      Still, reception to the rifle was lukewarm.  The 2018 announcement that the rifle would be procured in limited numbers alongside more conventional AK rifles is not exactly a coup.  The numbers bought are likely to be very low.  A 5.56mm AEK-972 and 7.62x39mm AEK-973 also exist.  The newest version of the rifle has been referred to as A-545.

      AKB and AKB-1



      AKB, closeup of the receiver

      The AKB and AKB-1 are a pair of painfully obscure designs designed by Viktor Kalashnikov, Mikhail Kalashnikov's son.  The later AKB-1 is the more conservative of the two, while the AKB is quite wild.

      Both rifles use a more or less conventional AK type bolt carrier, but the AKB uses the barrel as the countermass.  That's right; the entire barrel shoots forward while the bolt carrier moves back!  This unusual arrangement also allowed for an extremely high cyclic rate of fire; 2000RPM.  Later on a burst limiter and rate of fire limiter were added.  The rifle would fire at the full 2000 RPM for two round bursts, but a mere 1000 RPM for full auto.

      The AKB-1 was a far more conventional design, but it still had a BARS.  In this design the countermass was nested inside the main bolt carrier, similar to the AEK-971.

      Not a great deal of information is available about these rifles, but @Hrachya H wrote an article on them which can be read here.
    • By Collimatrix
      Tank design is often conceptualized as a balance between mobility, protection and firepower.  This is, at best, a messy and imprecise conceptualization.  It is messy because these three traits cannot be completely separated from each other.  An APC, for example, that provides basic protection against small arms fire and shell fragments is effectively more mobile than an open-topped vehicle because the APC can traverse areas swept by artillery fires that are closed off entirely to the open-topped vehicle.  It is an imprecise conceptualization because broad ideas like "mobility" are very complex in practice.  The M1 Abrams burns more fuel than the Leo 2, but the Leo 2 requires diesel fuel, while the omnivorous AGT-1500 will run happily on anything liquid and flammable.  Which has better strategic mobility?  Soviet rail gauge was slightly wider than Western European standard; 3.32 vs 3.15 meters.  But Soviet tanks in the Cold War were generally kept lighter and smaller, and had to be in order to be moved in large numbers on a rail and road network that was not as robust as that further west.  So if NATO and the Warsaw Pact had switched tanks in the late 1950s, they would both have downgraded the strategic mobility of their forces, as the Soviet tanks would be slightly too wide for unrestricted movement on rails in the free world, and the NATO tanks would have demanded more logistical support per tank than evil atheist commie formations were designed to provide.

      So instead of wading into a deep and subtle subject, I am going to write about something that is extremely simple and easy to describe in mathematical terms; the top speed of a tank moving in a straight line.  Because it is so simple and straightforward to understand, it is also nearly meaningless in terms of the combat performance of a tank.
      In short, the top speed of a tank is limited by three things; the gear ratio limit, the power limit and the suspension limit.  The tank's maximum speed will be whichever of these limits is the lowest on a given terrain.  The top speed of a tank is of limited significance, even from a tactical perspective, because the tank's ability to exploit its top speed is constrained by other factors.  A high top speed, however, looks great on sales brochures, and there are examples of tanks that were designed with pointlessly high top speeds in order to overawe people who needed impressing.

      When this baby hits 88 miles per hour, you're going to see some serious shit.
      The Gear Ratio Limit
      Every engine has a maximum speed at which it can turn.  Often, the engine is artificially governed to a maximum speed slightly less than what it is mechanically capable of in order to reduce wear.  Additionally, most piston engines develop their maximum power at slightly less than their maximum speed due to valve timing issues:

      A typical power/speed relationship for an Otto Cycle engine.  Otto Cycle engines are primitive devices that are only used when the Brayton Cycle Master Race is unavailable.
      Most tanks have predominantly or purely mechanical drivetrains, which exchange rotational speed for torque by easily measurable ratios.  The maximum rotational speed of the engine, multiplied by the gear ratio of the highest gear in the transmission multiplied by the gear ratio of the final drives multiplied by the circumference of the drive sprocket will equal the gear ratio limit of the tank.  The tank is unable to achieve higher speeds than the gear ratio limit because it physically cannot spin its tracks around any faster.
      Most spec sheets don't actually give out the transmission ratios in different gears, but such excessively detailed specification sheets are provided in Germany's Tiger Tanks by Hilary Doyle and Thomas Jentz.  The gear ratios, final drive ratios, and maximum engine RPM of the Tiger II are all provided, along with a handy table of the vehicle's maximum speed in each gear.  In eighth gear, the top speed is given as 41.5 KPH, but that is at an engine speed of 3000 RPM, and in reality the German tank engines were governed to less than that in order to conserve their service life.  At a more realistic 2500 RPM, the mighty Tiger II would have managed 34.6 KPH.
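The arithmetic above is simple enough to sketch in a few lines of Python.  The overall reduction ratio and sprocket circumference below are rough figures I picked to reproduce the quoted Tiger II numbers, not Henschel's actual data:

```python
# Gear-ratio limit: maximum engine speed divided by the total reduction
# ratio (gearbox x final drive), times the drive sprocket circumference.

def gear_ratio_limit_kph(engine_rpm, overall_reduction, sprocket_circumference_m):
    """Top speed (km/h) if nothing but gearing limited the tank."""
    sprocket_rpm = engine_rpm / overall_reduction
    metres_per_minute = sprocket_rpm * sprocket_circumference_m
    return metres_per_minute * 60 / 1000

OVERALL_RATIO = 11.45   # assumed: 8th gear times final drive, back-computed
SPROCKET_CIRC = 2.64    # assumed: roughly a 0.84 m diameter sprocket

print(gear_ratio_limit_kph(3000, OVERALL_RATIO, SPROCKET_CIRC))  # ~41.5 KPH
print(gear_ratio_limit_kph(2500, OVERALL_RATIO, SPROCKET_CIRC))  # ~34.6 KPH
```

Note that the limit scales linearly with engine RPM, which is why governing the engine down from 3000 to 2500 RPM knocks the top speed down by the same sixth.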
      In principle there are analogous limits for electrical and hydraulic drive components based on free speeds and stall torques, but they are a little more complicated to actually calculate.

      Part of the transmission from an M4 Sherman, picture from Jeeps_Guns_Tanks' great Sherman website
      The Power Limit
So a Tiger II could totally go 34.6 KPH in combat, right?  Well, perhaps.  And by "perhaps," I mean "lolololololol, fuck no."  I defy you to find me a test report where anybody manages to get a Tiger II over 33 KPH.  While the meticulous engineers of Henschel did accurately transcribe the gear ratios of the transmission and final drive, and did manage to use their tape measures correctly when measuring the drive sprockets, their rosy projections of the top speed did not account for the power limit.
      As a tank moves, power from the engine is wasted in various ways and so is unavailable to accelerate the tank.  As the tank goes faster and faster, the magnitude of these power-wasting phenomena grows, until there is no surplus power to accelerate the tank any more.  The system reaches equilibrium, and the tank maxes out at some top speed where it hits its power limit (unless, of course, the tank hits its gear ratio limit first).
      The actual power available to a tank is not the same as the gross power of the motor.  Some of the gross horsepower of the motor has to be directed to fans to cool the engine (except, of course, in the case of the Brayton Cycle Master Race, whose engines are almost completely self-cooling).  The transmission and final drives are not perfectly efficient either, and waste a significant amount of the power flowing through them as heat.  As a result of this, the actual power available at the sprocket is typically between 61% and 74% of the engine's quoted gross power.
      Once the power does hit the drive sprocket, it is wasted in overcoming the friction of the tank's tracks, in churning up the ground the tank is on, and in aerodynamic drag.  I have helpfully listed these in the order of decreasing importance.
The drag coefficient of a cube (which is a sufficiently accurate physical representation of a Tiger II) is .8. This, multiplied by half the fluid density of air (1.2 kg/m^3) times the velocity (9.4 m/s) squared times a rough frontal area of 3.8 by 3 meters gives a force of 483 newtons of drag.  This multiplied by the velocity of the Tiger II gives 4.5 kilowatts, or about six horsepower lost to drag.  With the governor installed, the HL 230 could put out about 580 horsepower, which would be four hundred something horses at the sprocket, so the aerodynamic drag would be 1.5% of the total available power.  Negligible.  Tanks are just too slow to lose much power to aerodynamic effects.
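Here's the same back-of-the-envelope drag calculation as a Python snippet, using the figures from the paragraph above:

```python
def drag_power_watts(cd, air_density, v_ms, frontal_area_m2):
    """Power consumed by aerodynamic drag: drag force times velocity."""
    force = 0.5 * cd * air_density * v_ms**2 * frontal_area_m2  # newtons
    return force * v_ms

# Tiger II at its governed top speed: cd ~0.8 (a cube), air at 1.2 kg/m^3,
# 9.4 m/s (~34 KPH), frontal area 3.8 x 3.0 m
P = drag_power_watts(0.8, 1.2, 9.4, 3.8 * 3.0)
print(P)           # ~4500 watts
print(P / 745.7)   # ~6 horsepower
```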
      Losses to the soil can be important, depending on the surface the tank is operating on.  On a nice, hard surface like a paved road there will be minimal losses between the tank's tracks and the surface.  Off-road, however, the tank's tracks will start to sink into soil or mud, and more power will be wasted in churning up the soil.  If the soil is loose or boggy enough, the tank will simply sink in and be immobilized.  Tanks that spread their weight out over a larger area will lose less power, and be able to traverse soft soils at higher speed.  This paper from the UK shows the relationship between mean maximum pressure (MMP), and the increase in rolling resistance on various soils and sands in excruciating detail.  In general, tanks with more track area, with more and bigger road wheels, and with longer track pitch will have lower MMP, and will sink into soft soils less and therefore lose less top speed.
      The largest loss of power usually comes from friction within the tracks themselves.  This is sometimes called rolling resistance, but this term is also used to mean other, subtly different things, so it pays to be precise.  Compared to wheeled vehicles, tracked vehicles have extremely high rolling resistance, and lose a lot of power just keeping the tracks turning.  Rolling resistance is generally expressed as a dimensionless coefficient, CR, which multiplied against vehicle weight gives the force of friction.  This chart from R.M. Ogorkiewicz' Technology of Tanks shows experimentally determined rolling resistance coefficients for various tracked vehicles:

      The rolling resistance coefficients given here show that a tracked vehicle going on ideal testing ground conditions is about as efficient as a car driving over loose gravel.  It also shows that the rolling resistance increases with vehicle speed.  A rough approximation of this increase in CR is given by the equation CR=A+BV, where A and B are constants and V is vehicle speed.  Ogorkiewicz explains:
      It should be noted that the lubricated needle bearing track joints of which he speaks were only ever used by the Germans in WWII because they were insanely complicated.  Band tracks have lower rolling resistance than metal link tracks, but they really aren't practical for vehicles much above thirty tonnes.  Other ways of reducing rolling resistance include using larger road wheels, omitting return rollers, and reducing track tension.  Obviously, there are practical limits to these approaches.
      To calculate power losses due to rolling resistance, multiply vehicle weight by CR by vehicle velocity to get power lost.  The velocity at which the power lost to rolling resistance equals the power available at the sprocket is the power limit on the speed of the tank.
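That calculation is also easy to sketch.  Power lost to rolling resistance is weight times (A + B*V) times V; setting that equal to sprocket power gives a quadratic in V.  The weight, sprocket power, and CR coefficients below are hypothetical, chosen only to be in a plausible range for a Tiger II:

```python
import math

def power_limit_speed(weight_n, cr_a, cr_b, sprocket_power_w):
    """Speed (m/s) at which rolling resistance absorbs all sprocket power.

    weight * (A*V + B*V^2) = sprocket power, solved for V with the
    quadratic formula (taking the positive root).
    """
    a = weight_n * cr_b
    b = weight_n * cr_a
    c = -sprocket_power_w
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

WEIGHT = 70_000 * 9.81        # ~70 tonnes, in newtons
A, B = 0.035, 0.002           # hypothetical CR coefficients, B in s/m
SPROCKET_POWER = 430 * 745.7  # ~430 hp actually reaching the sprocket

v = power_limit_speed(WEIGHT, A, B, SPROCKET_POWER)
print(v * 3.6)                # ~32 km/h, below the 34.6 KPH gear ratio limit
```

With these made-up but plausible coefficients, the power limit comes out just below the gear ratio limit, which is consistent with nobody ever getting a Tiger II much past 33 KPH in testing.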
      The Suspension Limit
The suspension limit on speed is starting to get dangerously far away from the world of spherical, frictionless horses where everything is easy to calculate using simple algebra, so I will be brief.  In addition to the continents of the world not being completely composed of paved surfaces that minimize rolling resistance, the continents of the world are also not perfectly flat.  This means that in order to travel at high speed off road, tanks require some sort of suspension or else they would shake their crews into jelly.  If the crew is being shaken too much to operate effectively, then it doesn't really matter if a tank has a high enough gear ratio limit or power limit to go faster.  This is also particularly obnoxious because suspension performance is difficult to quantify, as it involves resonance frequencies, damping coefficients, and a bunch of other complicated shit.
      Suffice it to say, then, that a very rough estimate of the ride-smoothing qualities of a tank's suspension can be made from the total travel of its road wheels:

      This chart from Technology of Tanks is helpful.  A more detailed discussion of the subject of tank suspension can be found here.
      The Real World Rudely Intrudes
      So, how useful is high top speed in a tank in messy, hard-to-mathematically-express reality?  The answer might surprise you!

      A Wehrmacht M.A.N. combustotron Ausf G
      We'll take some whacks at everyone's favorite whipping boy; the Panther.
A US report on a captured Panther Ausf G gives its top speed as an absolutely blistering 60 KPH on roads.  The Soviets could only get their captured Ausf D to do 50 KPH, but compared to a Sherman, which is generally only credited with 40 KPH on roads, that's alarmingly fast.
      So, would this mean that the Panther enjoyed a mobility advantage over the Sherman?  Would this mean that it was better able to make quick advances and daring flanking maneuvers during a battle?
In field tests the British found the Panther to have lower off-road speed than a Churchill VII (the Panther had a slightly busted transmission though).  In the same American report that credits the Panther Ausf G with a 60 KPH top speed on roads, it was found that off road the Panther was almost exactly as fast as an M4A3 (76)W, with individual Shermans slightly outpacing or slightly lagging behind the big cat.  Another US report from January 1945 states that over courses with many turns and curves, the Sherman would pull out ahead because the Sherman lost less speed negotiating corners.  Clearly, the Panther's advantage in straight line speed did not translate into better mobility in any combat scenario that did not involve drag racing.
      So what was going on with the Panther?  How could it leave everything but light tanks in the dust on a straight highway, but be outpaced by the ponderous Churchill heavy tank in actual field tests?

      Panther Ausf A tanks captured by the Soviets
A British report from 1946 on the Panther's transmission explains what's going on.  The Panther's transmission had seven forward gears, but off-road it really couldn't make it out of fifth.  In other words, the Panther had an extremely high gear ratio limit that allowed it exceptional speed on roads.  However, the Panther's mediocre power to weight ratio (nominally 13 hp/ton for the RPM-limited HL 230) meant that once the tank was off road and fighting mud, it only had a mediocre power limit.  Indeed, it is a testament to the efficiency of the Panther's running gear that it could keep up with Shermans at all, since the Panther's power to weight ratio was about 20% lower than that of that particular Sherman variant.
There were other factors limiting the Panther's speed in practical circumstances.  The geared steering system used in the Panther had different steering radii based on what gear the Panther was in.  The higher the gear, the wider the turn.  In theory this was excellent, but in practice the designers chose too wide a turn radius for each gear, which meant that for any but the gentlest turns the Panther's driver would need to slow down and downshift in order to complete the turn, thus sacrificing any speed advantage his tank enjoyed.
      So why would a tank be designed in such a strange fashion?  The British thought that the Panther was originally designed to be much lighter, and that the transmission had never been re-designed in order to compensate.  Given the weight gain that the Panther experienced early in development, this explanation seems like it may be partially true.  However, when interrogated, Ernst Kniepkamp, a senior engineer in Germany's wartime tank development bureaucracy, stated that the additional gears were there simply to give the Panther a high speed on roads, because it looked good to senior generals.
      So, this is the danger in evaluating tanks based on extremely simplistic performance metrics that look good on paper.  They may be simple to digest and simple to calculate, but in the messy real world, they may mean simply nothing.
    • By Collimatrix
      But if you try sometimes...

      Fighter aircraft became much better during the Second World War.  But, apart from the development of engines, it was not a straightforward matter of monotonous improvement.  Aircraft are a series of compromises.  Improving one aspect of performance almost always compromises others.  So, for aircraft designers in World War Two, the question was not so much "what will we do to make this aircraft better?" but "what are we willing to sacrifice?"

      To explain why, let's look at the forces acting on an aircraft:

      Lift is the force that keeps the aircraft from becoming one with the Earth.  It is generally considered a good thing. 
The lift equation is L = 0.5 × CL × R × V² × A, where L is lift, CL is the lift coefficient (which is a measure of the effectiveness of the wing based on its shape and other factors), R is air density, V is airspeed and A is the area of the wing.

      Airspeed is very important to an aircraft's ability to make lift, since the force of lift grows with the square of airspeed and in linear relation to all other factors.  This means that aircraft will have trouble producing adequate lift during takeoff and landing, since that's when they slow down the most.
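The square relationship is easy to see in a quick sketch of the lift equation (all the input numbers below are made up for illustration):

```python
def lift_newtons(cl, rho, v_ms, wing_area_m2):
    """Lift equation: L = 0.5 * CL * R * V^2 * A."""
    return 0.5 * cl * rho * v_ms**2 * wing_area_m2

# Hypothetical fighter-ish numbers: CL 1.2, sea-level air, 22.5 m^2 wing
low = lift_newtons(1.2, 1.225, 50, 22.5)    # at 50 m/s
high = lift_newtons(1.2, 1.225, 100, 22.5)  # at 100 m/s
print(high / low)   # 4.0 -- doubling airspeed quadruples lift
```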
Altitude is also a significant factor in an aircraft's ability to make lift.  The density of air decreases with altitude above sea level:

      Finally, wings work better the bigger they are.  Wing area directly relates to lift production, provided that wing shape is kept constant.

      While coefficient of lift CL contains many complex factors, one important and relatively simple factor is the angle of attack, also called AOA or alpha.  The more tilted an airfoil is relative to the airflow, the more lift it will generate.  The lift coefficient (and thus lift force, all other factors remaining equal) increases more or less linearly until the airfoil stalls:

      Essentially what's going on is that the greater the AOA, the more the wing "bends" the air around the wing.  But the airflow can only become so bent before it detaches.  Once the wing is stalled it doesn't stop producing lift entirely, but it does create substantially less lift than it was just before it stalled.  

      Drag is the force acting against the movement of any object travelling through a fluid.  Since it slows aircraft down and makes them waste fuel in overcoming it, drag is a total buzzkill and is generally considered a bad thing.

The drag equation is D = 0.5 × CD × R × V² × A, where D is drag, CD is the drag coefficient (which is a measure of how "draggy" a given aircraft is), R is air density, V is airspeed and A is the frontal area of the aircraft.

      This equation is obviously very similar to the lift equation, and this is where designers hit the first big snag.  Lift is good, but drag is bad, but because the factors that cause these forces are so similar, most measures that will increase lift will also increase drag.  Most measures that reduce drag will also reduce lift.

      Generally speaking, wing loading (the amount of wing area relative to the plane's weight) increased with newer aircraft models.  The stall speed (the slowest possible speed at which an aircraft can fly without stalling) also increased.  The massive increases in engine power alone were not sufficient to provide the increases in speed that designers wanted.  They had to deliberately sacrifice lift production in order to minimize drag.
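The link between wing loading and stall speed falls straight out of the lift equation: at the stall, lift at maximum CL must equal weight, so V_stall = sqrt(2 × (W/A) / (R × CLmax)).  A small sketch, with made-up wing loadings:

```python
import math

def stall_speed_ms(wing_loading_n_m2, cl_max, rho=1.225):
    """Slowest level-flight speed: lift at CL_max must equal weight."""
    return math.sqrt(2 * wing_loading_n_m2 / (rho * cl_max))

# Hypothetical early-war and late-war wing loadings (N/m^2):
early = stall_speed_ms(1500, 1.4)
late = stall_speed_ms(3000, 1.4)
print(late / early)   # sqrt(2): doubling wing loading raises stall speed ~41%
```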
      World War Two saw the introduction of laminar-flow wings.  These were wings that had a cross-section (or airfoil) that generated less turbulent airflow than previous airfoil designs.  However, they also generated much less lift.  Watch a B-17 (which does not have a laminar-flow wing) and a B-24 (which does) take off.  The B-24 eats up a lot more runway before its nose pulls up.

There are many causes of aerodynamic drag, but drag on a WWII fighter aircraft can be broken down into two major categories.  There is induced drag, which is caused by wingtip vortices and is a byproduct of lift production, and parasitic drag, which is everything else.  Induced drag is interesting in that it actually decreases with airspeed.  So for takeoff and landing it is a major consideration, but for cruising flight it is less important.

However, induced drag is also significant during combat maneuvering.  Wings with a higher aspect ratio, that is, a higher ratio of wingspan to wing chord (the distance from the leading edge to the trailing edge of the wing), produce less induced drag.

      So, for the purposes of producing good cruise efficiency, reducing induced drag was not a major consideration.  For producing the best maneuvering fighter, reducing induced drag was significant.

      Weight is the force counteracting lift.  The more weight an aircraft has, the more lift it needs to produce.  The more lift it needs to produce, the larger the wings need to be and the more drag they create.  The more weight an aircraft has, the less it can carry.  The more weight an aircraft has, the more sluggishly it accelerates.  In general, weight is a bad thing for aircraft.  But for fighters in WWII, weight wasn't entirely a bad thing.  The more weight an aircraft has relative to its drag, the faster it can dive.  Diving away to escape enemies if a fight was not going well was a useful tactic.  The P-47, which was extremely heavy, but comparatively well streamlined, could easily out-dive the FW-190A and Bf-109G/K.

      In general though, designers tried every possible trick to reduce aircraft weight.  Early in the war, stressed-skin monocoque designs began to take over from the fabric-covered, built-up tube designs.

      The old-style construction of the Hawker Hurricane.  It's a shit plane.

      Stressed-skin construction of the Spitfire, with a much better strength to weight ratio.
      But as the war dragged on, designers tried even more creative ways to reduce weight.  This went so far as reducing the weight of the rivets holding the aircraft together, stripping the aircraft of any unnecessary paint, and even removing or downgrading some of the guns.

      An RAF Brewster Buffalo in the Pacific theater.  The British downgraded the .50 caliber machine guns to .303 weapons in order to reduce weight.
      In some cases, however, older construction techniques were used at the war's end due to materials shortages or for cost reasons.  The German TA-152, for instance, used a large amount of wooden construction with steel reinforcement in the rear fuselage and tail in order to conserve aluminum.  This was not as light or as strong as aluminum, but beggars can't be choosers.

      Extensive use of (now rotten) wood in the rear fuselage of the TA-152
      Generally speaking, aircraft get heavier with each variant.  The Bf-109C of the late 1930s weighed 1,600 kg, but the Bf-109G of the second half of WWII had ballooned to over 2,200 kg.  One notable exception was the Soviet YAK-3:

The YAK-3, which was originally designated YAK-1M, was a demonstration of what designers could accomplish if they had the discipline to keep aircraft weight as low as possible.  Originally, it had been intended that the YAK-1 (which had somewhat mediocre performance vs. German fighters) would be improved by installing a new engine with more power.  But all of the new and more powerful engines proved to be troublesome and unreliable.  Without any immediate prospect of more engine power, the Yakovlev engineers instead improved performance by reducing weight.  The YAK-3 ended up weighing nearly 300 kg less than the YAK-1, and the difference in performance was startling.  At low altitude the YAK-3 had a tighter turn radius than anything the Luftwaffe had.
      Thrust is the force propelling the aircraft forwards.  It is generally considered a good thing.  Thrust was one area where engineers could and did make improvements with very few other compromises.  The art of high-output piston engine design was refined during WWII to a precise science, only to be immediately rendered obsolete by the development of jet engines.
Piston engined aircraft convert engine horsepower into thrust airflow via a propeller.  Thrust was increased during WWII primarily by making the engines more powerful, although there were also some improvements in propeller design and efficiency.  A tertiary source of thrust was the addition of jet thrust from the exhaust of the piston engines and from Meredith Effect radiators.
      The power output of WWII fighter engines was improved in two ways; first by making the engines larger, and second by making the engines more powerful relative to their weight.  Neither process was particularly straightforward or easy, but nonetheless drastic improvements were made from the war's beginning to the war's end.

      The Pratt and Whitney Twin Wasp R-1830-1 of the late 1930s could manage about 750-800 horsepower.  By mid-war, the R-1830-43 was putting out 1200 horsepower out of the same displacement.  Careful engineering, gradual improvements, and the use of fuel with a higher and more consistent octane level allowed for this kind of improvement.

      The R-1830 Twin Wasp

      However, there's no replacement for displacement.  By the beginning of 1943, Japanese aircraft were being massacred with mechanical regularity by a new US Navy fighter, the F6F Hellcat, which was powered by a brand new Pratt and Whitney engine, the R-2800 Double Wasp.

      The one true piston engine

As you can see from the cross-section above, the R-2800 has two banks of cylinders.  This is significant to fighter performance because even though it had 53% more engine displacement than the Twin Wasp (for US engines, the numerical designation indicated engine displacement in cubic inches), the Double Wasp had only about 21% more frontal area.  This meant that a fighter with the R-2800 was enjoying an increase in power that was not proportionate with the increase in drag.  Early R-2800-1 models could produce 1800 horsepower, but by war's end the best models could make 2100 horsepower.  That meant a 45% increase in horsepower relative to the frontal area of the engine.  Power to weight ratios for the latest model R-1830 and R-2800 were similar, while power to displacement improved by about 14%.
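The displacement and frontal-area arithmetic is easy to check.  The cowl diameters below (roughly 48 inches for the R-1830 and 52.8 inches for the R-2800) are approximate figures I'm assuming, not official Pratt and Whitney specs:

```python
import math

def frontal_area(diameter_in):
    """Circular frontal area of a radial engine, in square inches."""
    return math.pi * (diameter_in / 2) ** 2

area_1830 = frontal_area(48.0)   # assumed R-1830 diameter
area_2800 = frontal_area(52.8)   # assumed R-2800 diameter

print(2800 / 1830)             # ~1.53: 53% more displacement
print(area_2800 / area_1830)   # ~1.21: but only 21% more frontal area

# Horsepower per unit frontal area, late-model engines (1200 vs 2100 hp):
print((2100 / area_2800) / (1200 / area_1830))   # ~1.45
```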
      By war's end Pratt and Whitney had the monstrous R-4360 in production:

      This gigantic engine had four rows of radially-arranged pistons.  Compared to the R-2800 it produced about 50% more power for less than 10% more frontal area.  Again, power to weight and power to displacement showed more modest improvements.  The greatest gains were from increasing thrust with very little increase in drag.  All of this was very hard for the engineers, who had to figure out how to make crankshafts and reduction gear that could handle that much power without breaking, and also how to get enough cooling air through a giant stack of cylinders.

      Attempts at boosting the thrust of fighters with auxiliary power sources like rockets and ramjets were tried, but were not successful.

      Yes, that is a biplane with retractable landing gear and auxiliary ramjets under the wings.  Cocaine is a hell of a drug.

      A secondary source of improvement in thrust came from the development of better propellers.  Most of the improvement came just before WWII broke out, and by the time the war broke out, most aircraft had constant-speed propellers.

      For optimal performance, the angle of attack of the propeller blades must be matched to the ratio of the forward speed of the aircraft to the circular velocity of the propeller tips.  To cope with the changing requirements, constant speed or variable pitch propellers were invented that could adjust the angle of attack of the propeller blades relative to the hub.
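The matching problem can be sketched numerically: the angle of the oncoming airflow seen by a blade section is atan(forward speed / rotational speed of that section), and a constant-speed propeller pitches the blades to track that angle.  The diameter and RPM below are invented ballpark figures:

```python
import math

def helix_angle_deg(v_ms, prop_rpm, prop_diameter_m, radius_fraction=0.75):
    """Angle of the airflow seen by a blade section (conventionally
    measured at 75% of the blade radius)."""
    section_speed = math.pi * prop_diameter_m * radius_fraction * (prop_rpm / 60)
    return math.degrees(math.atan2(v_ms, section_speed))

# The same propeller at takeoff speed vs. high-speed flight:
print(helix_angle_deg(40, 1300, 3.4))    # fine (shallow) pitch needed
print(helix_angle_deg(180, 1300, 3.4))   # much coarser pitch needed
```

A fixed-pitch propeller can only be right at one of these conditions; the constant-speed unit keeps the blade angle of attack near optimum across the whole range.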

      There was also improvement in using exhaust from the engine and the waste heat from the engine to increase thrust.  Fairly early on, designers learned that the enormous amount of exhaust produced by the engine could be directed backwards to generate thrust.  Exhaust stacks were designed to work as nozzles to harvest this small source of additional thrust:

      The exhaust stacks of the Merlin engine in a Spitfire act like jet nozzles

      A few aircraft also used the waste heat being rejected by the radiator to produce a small amount of additional thrust.  The Meredith Effect radiator on the P-51 is the best-known example:

Excess heat from the engine was radiated into the moving airstream that flowed through the radiator.  The heat would expand the air, and the radiator was designed to use this expansion and turn it into acceleration.  In essence, the radiator of the P-51 worked like a very weak ramjet.  By the most optimistic projections, the additional thrust from the radiator would cancel out the drag of the radiator at maximum velocity.  So while it may not have provided net thrust, it did offset much of its own drag, and every bit of thrust mattered.
      For the most part, achieving specific design objectives in WWII fighters was a function of minimizing weight, maximizing lift, minimizing drag and maximizing thrust.  But doing this in a satisfactory way usually meant emphasizing certain performance goals at the expense of others.
      Top Speed, Dive Speed and Acceleration
      During the 1920s and 1930s, the lack of any serious air to air combat allowed a number of crank theories on fighter design to develop and flourish.  These included the turreted fighter:

      The heavy fighter:

      And fighters that placed far too much emphasis on turn rate at the expense of everything else:

      But it quickly became clear, from combat in the Spanish Civil War, China, and early WWII, that going fast was where it was at.  In a fight between an aircraft that was fast and an aircraft that was maneuverable, the maneuverable aircraft could twist and pirouette in order to force the situation to their advantage, while the fast aircraft could just GTFO the second that the situation started to sour.  In fact, this situation would prevail until the early jet age when the massive increase in drag from supersonic flight made going faster difficult, and the development of heat-seeking missiles made it dangerous to run from a fight with jet nozzles pointed towards the enemy.
The top speed of an aircraft is the speed at which drag and thrust balance each other out, and the aircraft stops accelerating.  Maximizing top speed means minimizing drag and maximizing thrust.  The heavy fighters had a major, inherent disadvantage in terms of top speed.  This is because twin engined prop fighters have three big lumps contributing to frontal area; two engines and the fuselage.  A single engine fighter only has the engine, with the rest of the fuselage tucked neatly behind it.  The turret fighter isn't as bad; the turret contributes some additional drag, but not as much as the twin-engine design does.  It does, however, add quite a bit of weight, which cripples acceleration even if it has a smaller effect on top speed.  Early-war Japanese and Italian fighters were designed with dogfight performance above all other considerations, which meant that they had large wings to generate large turning forces, and often had open cockpits for the best possible visibility.  Both of these features added drag, and left these aircraft too slow to compete against aircraft that sacrificed some maneuverability for pure speed.

      Drag force rises roughly as a square function of airspeed (throw this formula out the window when you reach speeds near the speed of sound).  Power is equal to force times distance over time, or force times velocity.  So, power consumed by drag will be equal to drag coefficient times frontal area times airspeed squared times airspeed.  So, the power required for a given maximum airspeed will be a roughly cubic function.  And that is assuming that the efficiency of the propeller remains constant!
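That cubic relationship is worth seeing in numbers:

```python
def power_ratio(v_new, v_old):
    """Power needed scales roughly with the cube of airspeed:
    drag force ~ V^2, and power = force x velocity."""
    return (v_new / v_old) ** 3

print(power_ratio(1.10, 1.00))   # ~1.33: 10% more speed costs ~33% more power
print(power_ratio(2.0, 1.0))     # 8.0: doubling speed needs eight times the power
```

This is why the massive horsepower gains of WWII engines bought comparatively modest gains in top speed, and why designers chased drag reduction so aggressively.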
Acceleration is (thrust - drag)/weight.  It is possible to have an aircraft that has a high maximum speed, but quite poor acceleration and vice versa.  Indeed, the A6M5 Zero had a somewhat better power to weight ratio than the F6F-5 Hellcat, but a considerably lower top speed.  In a drag race the A6M5 would initially pull ahead, but it would be gradually overtaken by the Hellcat, which would eventually get to speeds that the Zero simply could not match.

      Maximum dive speed is also a function of drag and thrust, but it's a bit different because the weight of the aircraft times the sine of the dive angle also counts towards thrust.  In general this meant that large fighters dove better.  Drag scales with the frontal area, which is a square function of size.  Weight scales with volume (assuming constant density), which is a cubic function of size.  Big American fighters like the P-47 and F4U dove much faster than their Axis opponents, and could pick up speed that their opponents could not hope to match in a dive.
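A rough sketch of that equilibrium: the dive settles at the speed where drag equals thrust plus the component of weight along the flight path.  Every number below is invented purely to illustrate the square-cube effect described above:

```python
import math

def dive_terminal_speed(thrust_n, weight_n, dive_angle_deg, cd,
                        frontal_area_m2, rho=1.0):
    """Speed (m/s) where drag balances thrust plus the along-path
    weight component: 0.5*rho*cd*A*V^2 = T + W*sin(angle)."""
    driving_force = thrust_n + weight_n * math.sin(math.radians(dive_angle_deg))
    return math.sqrt(2 * driving_force / (rho * cd * frontal_area_m2))

# Hypothetical light (~3 tonne) and heavy (~8 tonne) fighters in a 60 degree
# dive; the heavy one has more frontal area, but weight grows faster.
light = dive_terminal_speed(6000, 30_000, 60, 0.25, 15)
heavy = dive_terminal_speed(8000, 80_000, 60, 0.25, 18)
print(heavy > light)   # True
```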

      A number of US fighters dove so quickly that they had problems with localized supersonic airflow.  Supersonic airflow was very poorly understood at the time, and many pilots died before somewhat improvisational solutions like dive brakes were added.

      Ranking US ace Richard Bong takes a look at the dive brakes of a P-38

      Acceleration, top speed and dive speed are all improved by reducing drag, so every conceivable trick for reducing parasitic drag was tried.

      The Lockheed P-38 used flush rivets on most surfaces as well as extensive butt welds to produce the smoothest possible flight surfaces.  This did reduce drag, but it also contributed to the great cost of the P-38.

      The Bf 109 was experimentally flown with a V-tail to reduce drag.  V-tails have lower interference drag than conventional tails, but the modification was found to compromise handling during takeoff and landing too much and was not deemed worth the small amount of additional speed.

      The YAK-3 was coated with a layer of hard wax to smooth out the wooden surface and reduce drag.  This simple improvement actually increased top speed by a small, but measurable amount!  In addition, the largely wooden structure of the aircraft had few rivets, which meant even less drag.

The Dornier Do 335 was a novel approach to solving the problem of drag in twin-engine fighters.  The two engines were placed at the front and rear of the aircraft, driving a pusher and a tractor propeller.  This unconventional configuration led to some interesting problems, and the war ended before these could be solved.

      The J2M Raiden had a long engine cowling that extended several feet forward in front of the engine.  This tapered engine cowling housed an engine-driven fan for cooling air as well as a long extension shaft of the engine to drive the propeller.  This did reduce drag, but at the expense of lengthening the nose and so reducing pilot visibility, and also moving the center of gravity rearward relative to the center of lift.

      Designers were already stuffing the most powerful engines coming out of factories into aircraft, provided that they were reasonably reliable (and sometimes not even then).  After that, the most expedient solution to improve speed was to sacrifice lift to reduce drag and make the wings smaller.  The reduction in agility at low speeds was generally worth it, and at higher speeds relatively small wings could produce satisfactory maneuverability since lift is a square function of velocity.  Alternatively, so-called laminar flow airfoils (they weren't actually laminar flow) were substituted, which produced less drag but also less lift.  

      The Bell P-63 had very similar aerodynamics to the P-39 and nearly the same engine, but was some 80 KPH faster thanks to the new laminar flow airfoils.  However, the landing speed also increased by about 40 KPH, largely sacrificing the benign landing characteristics that P-39 pilots loved.

      The biggest problem with reducing the lift of the wings to increase speed was that it made takeoff and landing difficult.  Aircraft with less lift need to get to higher speeds to generate enough lift to take off, and need to land at higher speeds as well.  As the war progressed, fighter aircraft generally became trickier to fly, and the butcher's bill of pilots lost in accidents and training was enormous.
      Turn Rate
      Sometimes things didn't go as planned.  A fighter might be ambushed, or an ambush could go wrong, and the fighter would need to turn, turn, turn.  It might need to turn to get into a position to attack, or it might need to turn to evade an attack.

      Aircraft in combat turn with their wings, not their rudders.  This is because the wings are way, way bigger, and therefore much more effective at turning the aircraft.  The rudder is just there to make the nose do what the pilot wants it to.  The pilot rolls the aircraft until it's oriented correctly, and then begins the turn by pulling the nose up.  Pulling the nose up increases the angle of attack, which increases the lift produced by the wings.  This produces centripetal force which pulls the plane into the turn.  Since WWII aircraft don't have the benefit of computer-run fly-by-wire flight control systems, the pilot would also make small corrections with rudder and ailerons during the turn.

      But, as we saw above, making more lift means making more drag.  Therefore, when aircraft turn they tend to slow down unless the pilot guns the throttle.  Long after WWII, Col. John Boyd (PBUH) codified the relationship between drag, thrust, lift and weight as it relates to aircraft turning performance into an elegant mathematical model called energy-maneuverability theory, which also allowed for charts that depict these relationships.

      Normally, I would gush about how wonderful E-M theory is, but as it turns out there's an actual aerospace engineer named John Golan who has already written a much better explanation than I would likely manage, so I'll just link that.  And steal his diagram:

      E-M charts are often called "doghouse plots" because of the shape they trace out.  An E-M chart specifies the turning maneuverability of a given aircraft with a given amount of fuel and weapons at a particular altitude.  Turn rate is on the Y axis and airspeed is on the X axis.  The aircraft can reach any condition within the dotted line, at least momentarily, though not necessarily hold it.  The aircraft can fly continuously anywhere under the solid line until it runs out of fuel.

      The aircraft cannot fly to the left of the doghouse because it cannot produce enough lift at such a slow speed to stay in the air.  Eventually it will run out of sky and hit the ground.  The curved, right-side "roof" of the doghouse represents the maximum allowable G load; the aircraft cannot fly above this curve or it or the pilot will break from G forces.  Finally, the rightmost, vertical side of the doghouse is the maximum speed that the aircraft can fly at; either it doesn't have the thrust to fly faster, or something breaks if the pilot should try.  The peak of the "roof" of the doghouse represents the aircraft's ideal airspeed for maximum turn rate.  This is usually called the "corner velocity" of the aircraft.
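
      The two limits that form the doghouse roof can be sketched numerically.  Below the corner velocity the wings can't generate enough lift to reach the G limit; above it, the G limit caps the turn.  The stall speed and G limit below are invented numbers for a hypothetical fighter, not any real aircraft:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def turn_rate(v, v_stall, n_max):
    """Turn rate (rad/s) at airspeed v: lift-limited below corner
    velocity, G-limited above it."""
    n_lift = (v / v_stall) ** 2      # max load factor the wings can make
    n = min(n_lift, n_max)           # whichever limit bites first
    if n <= 1.0:
        return 0.0                   # at 1 g or less, no turn at all
    return G * math.sqrt(n * n - 1.0) / v

# Hypothetical fighter: 45 m/s 1-g stall speed, 6-g limit
v_stall, n_max = 45.0, 6.0
corner = v_stall * math.sqrt(n_max)  # speed where both limits meet
print(f"corner velocity ~= {corner:.0f} m/s")
print(f"peak turn rate ~= {math.degrees(turn_rate(corner, v_stall, n_max)):.1f} deg/s")
```

Sweeping `v` across the envelope traces out exactly the doghouse roof: turn rate rises with speed until the corner velocity, then falls again as the G limit takes over.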

      So, let's look at some actual (ish) EM charts:


      Now, these are taken from a flight simulator, but they're accurate enough to illustrate the point.  They're also a little busier than the example above, but still easy enough to understand.  The gray plot overlaid on the chart consists of G-force (the curves) and turn radius (the straight lines radiating from the graph origin).  The green doghouse shows the aircraft's performance with flaps.  The red curve shows the maximum sustained turn rate.  You may notice that the red line terminates on the X axis at a surprisingly low top speed; that's because these charts were made for a very low altitude confrontation, and their maximum level top speed could only be achieved at higher altitudes.  These aircraft could fly faster than the limits of the red line show, but only if they picked up extra speed from a dive.  These charts could also be overlaid on each other for comparison, but in this case that would be like a graphic designer vomiting all over the screen, or a Studio Killers music video.

      From these charts, we can conclude that at low altitude the P-51D enjoys many advantages over the Bf 109G-6.  It has a higher top speed at this altitude, 350-something vs 320-something MPH.  However, the P-51 has a lower corner speed.  In general, the P-51's flight envelope at this altitude is just bigger.  But that doesn't mean that the Bf 109 doesn't have a few tricks.  As you can see, it enjoys a better sustained turn rate from about 175 to 325 MPH.  Within that speed band, the 109 will be able to hold on to its energy better than the Pony provided it uses only moderate turns.

      During turning flight, our old problem induced drag comes back to haunt fighter designers.  The induced drag equation is Cdi = (Cl^2) / (pi * AR * e).  Where Cdi is the induced drag coefficient, Cl is the lift coefficient, pi is the irrational constant pi, AR is aspect ratio, or wingspan squared divided by wing area, and e is not the irrational constant e but an efficiency factor.

      There are a few things of interest here.  For starters, induced drag increases with the square of the lift coefficient.  Lift coefficient increases more or less linearly (see above) with angle of attack.  There are various tricks for increasing wing lift nonlinearly, as well as various tricks for generating lift with surfaces other than the wings, but in WWII, designers really didn't use these much.  So, for all intents and purposes, the induced drag coefficient will increase with the square of angle of attack, and for a given airspeed, induced drag will increase with the square of the number of Gs the aircraft is pulling.  Since this is a square function, it can outrun other, linear functions easily, so minimizing the effect of induced drag is a major consideration in improving the sustained turn performance of a fighter.
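
      The square-law relationship is easy to see in code.  The aspect ratio and efficiency factor below are illustrative numbers, not measurements of any particular aircraft:

```python
import math

def induced_drag_coeff(cl, aspect_ratio, e=0.85):
    """Cdi = Cl^2 / (pi * AR * e)"""
    return cl ** 2 / (math.pi * aspect_ratio * e)

# Doubling the lift coefficient (roughly, doubling angle of attack, or
# pulling twice the Gs at the same airspeed) quadruples induced drag:
low = induced_drag_coeff(0.5, aspect_ratio=6.0)
high = induced_drag_coeff(1.0, aspect_ratio=6.0)
print(high / low)  # 4.0
```

Note that `aspect_ratio` and `e` only scale the result; the square law comes entirely from the `cl ** 2` term, which is why hard sustained turns bleed energy so quickly.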

      To maximize turn rate in a fighter, designers needed to make the fighter as light as possible, make the engine as powerful as possible, make the wings have as much area as possible, make the wings as long and skinny as possible, and to use the most efficient possible wing shape.

      You probably noticed that two of these requirements, make the plane as light as possible and make the wings as large as possible, directly contradict the requirements of good dive performance.  There is simply no way to reconcile them; the designers either needed to choose one, the other, or come to an intermediate compromise.  There was no way to have both great turning performance and great diving performance.

      Since the designers could generally be assumed to have reduced weight to the maximum possible extent and put the most powerful engine available into the aircraft, that left the design of the wings.

      The larger the wings, the more lift they generate at a given angle of attack.  The lower the angle of attack, the less induced drag.  The bigger wings would add more drag in level flight and reduce top speed, but they would actually reduce drag during maneuvering flight and improve sustained turn rate.  A rough estimate of the turning performance of the aircraft can be made by dividing the weight of the aircraft by its wing area.  This is called wing loading, and people who ought to know better put far too much emphasis on it.  If you have E-M charts, you don't need wing loading.  However, E-M charts require quite a bit of aerodynamic data to calculate, while wing loading is much simpler.
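
      Wing loading itself is trivial to compute, which is exactly why it gets overused.  The masses and wing areas below are rough ballpark figures for illustration only:

```python
def wing_loading(mass_kg, wing_area_m2):
    """Weight per unit wing area (kg/m^2); lower generally means
    better sustained turning, all else being equal."""
    return mass_kg / wing_area_m2

# Rough, illustrative loaded masses and wing areas
spitfire = wing_loading(2700.0, 22.5)   # big elliptical wing
bf109e   = wing_loading(2600.0, 16.4)   # smaller wing, higher loading
print(f"Spitfire ~{spitfire:.0f} kg/m^2, Bf 109E ~{bf109e:.0f} kg/m^2")
```

The single number hides everything that matters in an E-M chart (thrust, drag polar, altitude), which is the author's point: use it only when you have nothing better.
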

      Giving the wings a higher aspect ratio would also improve turn performance, but the designers' hands were somewhat tied in this respect.  The wings usually stored the landing gear and often the armament of the fighter.  In addition, the wings generated the lift, and making the wings too long and skinny would make them too structurally flimsy to support the aircraft in maneuvering flight.  That is, unless they were extensively reinforced, which would add weight and completely defeat the purpose.  So, designers were practically limited in how much they could vary the aspect ratio of fighter wings.

      The wing planform has a significant effect on the efficiency factor e.  The ideal shape to reduce induced drag is the "elliptical" (actually two half ellipses) wing shape used on the Supermarine Spitfire.

      This wing shape was, however, difficult to manufacture.  By the end of the war, engineers had come up with several wing planforms that were nearly as efficient as the elliptical wing, but were much easier to manufacture.

      Another way to reduce induced drag is to slightly twist the wings of the aircraft so that the wing tips point down.

      This is called washout.  The main purpose of washout was to improve the responsiveness of the ailerons during hard maneuvering, but it could give small efficiency improvements as well.  Washout obviously complicates the manufacture of the wing, and thus it wasn't that common in WWII, although the Ta 152 notably did have three degrees of tip washout.

      The Bf 109 had leading edge slats that would deploy automatically at high angles of attack.  Again, the main intent of these devices was to improve the control of the aircraft during takeoff and landing and hard maneuvering, but they did slightly improve the maximum angle of attack the wing could be flown at, and therefore the maximum instantaneous turn rate of the aircraft.  The downside of the slats was that they weakened the wing structure and precluded the placement of guns inside the wing.

      leading edge slats of a Bf 109 in the extended position

      One way to attempt to reconcile the conflicting requirements of high speed and good turning capability was the "butterfly" flaps seen on Japanese Nakajima fighters.

      This model of a Ki-43 shows the location of the butterfly flaps; on the underside of the wings, near the roots

      These flaps would extend during combat, in the case of later Nakajima fighters, automatically, to increase wing area and lift.  During level and high speed flight they would retract to reduce drag.  Again, this would mainly improve handling on the left hand side of the doghouse, and would improve instantaneous turn rate but do very little for sustained turn rate.

      In general, turn performance was sacrificed in WWII for more speed, as the two were difficult to reconcile.  There were a small number of tricks known to engineers at the time that could improve instantaneous turn rate on fast aircraft with high wing loading, but these tricks were inadequate to the task of designing an aircraft that was very fast and also very maneuverable.  Designers tended to settle for as fast as possible while still possessing decent turning performance.
      Climb Rate
      Climb rate was most important for interceptor aircraft tasked with quickly getting to the level of intruding enemy aircraft.  When an aircraft climbs it gains potential energy, which means it needs spare available power.  The specific excess power of an aircraft is equal to Ps = V(T - D)/W, where V is airspeed, W is weight, T is thrust and D is drag.  Note that lift isn't anywhere in this equation!  Provided that the plane has adequate lift to stay in the air and its wings are reasonably efficient at generating lift so that the D term doesn't get too high, a plane with stubby wings can be quite the climber!
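
      The relationship can be written out directly.  With no acceleration, all of the excess power goes into altitude, so Ps is simply the steady climb rate at that speed.  Every number below is invented for illustration:

```python
def specific_excess_power(v, weight_n, thrust_n, drag_n):
    """Ps = V * (T - D) / W, in m/s.  With no acceleration, Ps is the
    maximum steady climb rate at airspeed v."""
    return v * (thrust_n - drag_n) / weight_n

# Invented numbers: 90 m/s climb speed, 30 kN weight (~3 tonnes),
# 9 kN of thrust against 3 kN of drag
print(specific_excess_power(90.0, 30000.0, 9000.0, 3000.0))  # 18.0 m/s
```

The formula also makes the wing-size point concrete: shrinking the wings raises the stall speed but lowers D at climb speed, so Ps, and with it climb rate, goes up.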

      The Mitsubishi J2M Raiden is an excellent example of what a fighter optimized for climb rate looked like.

      A captured J2M in the US during testing

      The J2M had a very aerodynamically clean design, somewhat at the expense of pilot visibility and decidedly at the expense of turn rate.  The airframe was comparatively light, somewhat at the expense of firepower and at great expense to fuel capacity.  Surprisingly for a Japanese aircraft, there was some pilot armor.  The engine was, naturally, the most powerful available at the time.  The wings, in addition to being somewhat small by Japanese standards, had laminar-flow airfoils that sacrificed maximum lift for lower drag.

      The end result was an aircraft that was the polar opposite of the comparatively slow, long-ranged and agile A6M Zero-sen fighters that IJN pilots were used to!  But it certainly worked.  The J2M was one of the fastest-climbing piston engine aircraft of the war, comparable to the F8F Bearcat.

      The design requirements for climb rate were practically the same as the design requirements for acceleration, and could generally be reconciled with the design requirements for dive performance and top speed.  The design requirements for turn rate were very difficult to reconcile with the design requirements for climb rate.
      Roll Rate
      In maneuvering combat, aircraft roll to the desired orientation and then pitch.  The ability to roll quickly allows the fighter to transition between turns faster, giving it an edge in maneuvering combat.

      Aircraft roll with their ailerons by making one wing generate more lift while the other wing generates less lift.

      The physics from there are the same for any other rotating object.  Rolling acceleration is a function of the amount of torque that the ailerons can provide divided by the moment of inertia of the aircraft about the roll axis.  So, to improve roll rate, a fighter needs the lowest possible moment of inertia and the highest possible torque from its ailerons.
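
      The inertia side of that trade is where mass placement matters: each piece of mass contributes to roll inertia with the square of its distance from the roll axis, which is why wing-mounted guns and engines hurt roll rate so badly.  A minimal sketch with invented masses and distances:

```python
def roll_inertia(point_masses):
    """Moment of inertia about the roll axis for (mass_kg, distance_m)
    pairs: I = sum(m * r^2)."""
    return sum(m * r ** 2 for m, r in point_masses)

# Invented example: a 50 kg cannon at the wing root vs. out on the wing
root_mount = roll_inertia([(50.0, 0.5)])   # 12.5 kg*m^2
wing_mount = roll_inertia([(50.0, 2.0)])   # 200.0 kg*m^2
print(wing_mount / root_mount)  # 16.0
```

Moving the same gun four times farther from the centerline costs sixteen times the inertia, which is the arithmetic behind the FW-190's fuselage tanks and root-mounted guns discussed below.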

      The FW-190A was the fighter best optimized for roll rate.  Kurt Tank's design team did everything right when it came to maximizing roll rate.

      The FW-190 could out-roll nearly every other piston fighter

      The FW-190 has the majority of its mass near the center of the aircraft.  The fuel is all stored in the fuselage and the guns are located either above the engine or in the roots of the wings.  Later versions added more guns, but these were placed just outside of the propeller arc.

      Twin engined fighters suffered badly in roll rate in part because the engines had to be placed far from the centerline of the aircraft.  Fighters with armament far out in the wings also suffered.

      The ailerons were very large relative to the size of the wing.  This meant that they could generate a lot of torque.  Normally, large ailerons were a problem for pilots to deflect.  Most World War Two fighters did not have any hydraulic assistance; controls needed to be deflected with muscle power alone, and large controls could encounter too much wind resistance for the pilots to muscle through at high speed.

      The FW-190 overcame this in two ways.  The first was that, compared to the Bf 109, the cockpit was decently roomy.  Not as roomy as a P-47, of course, but still a vast improvement.  Cockpit space in World War Two fighters wasn't just a matter of comfort.  The pilots needed elbow room in the cockpit in order to wrestle with the control stick.  The FW-190 also used controls that were actuated by solid rods rather than by cables.  This meant that there was less give in the system, since cables aren't completely rigid.

      Additionally, the FW-190 used Frise ailerons, which have a protruding tip that bites into the wind and reduces the necessary control forces:

      Several US Navy fighters, like later models of the F6F and F4U, used spring-loaded aileron tabs, which accomplished something similar by different means:

      In these designs a spring would assist in pulling the aileron one way, and a small tab on the aileron the opposite way in order to aerodynamically move the aileron.  This helped reduce the force necessary to move the ailerons at high speeds.

      Another, somewhat less obvious requirement for good roll rate in fighters was that the wings be as rigid as possible.  At high speeds, the force of the ailerons deflecting would tend to twist the wings of the aircraft in the opposite direction.  Essentially, the ailerons began to act like servo tabs.  This meant that the roll rate would begin to suffer at high speeds, and at very high speeds the aircraft might actually roll in the opposite direction of the pilot's input.

      The FW-190's wings were extremely rigid.  Wing rigidity is a function of aspect ratio and construction.

      The FW-190 had wings that had a fairly low aspect ratio, and were somewhat overbuilt.  Additionally, the wings were built as a single piece, which was a very strong and robust approach.  This had the downside that damaged wings had to be replaced as a unit, however.

      Some Spitfires were modified by changing the wings from the original elliptical shape to a "clipped" planform that ended abruptly at a somewhat shorter span.  This sacrificed some turning performance, but it made the wings much stiffer and therefore improved roll rate.

      Finally, most aircraft at the beginning of the war had fabric-skinned ailerons, including many that had metal-skinned wings.  Fabric-skinned ailerons were cheaper and less prone to vibration problems than metal ones, but at high speed the shellacked surface of the fabric just wasn't air-tight enough, and a significant amount of airflow would begin going into and through the aileron.  This degraded their effectiveness greatly, and the substitution of metal surfaces helped greatly.
      Stability and Safety
      World War Two fighters were a handful.  The pressures of war meant that planes were often rushed into service without thorough testing, and there were often nasty surprises lurking in unexplored corners of the flight envelope.

      This is the P-51H.  Even though the P-51D had been in mass production for years, it still had some lingering stability issues.  The P-51H solved these by enlarging the tail.  Performance was improved by a comprehensive program of drag reduction and weight reduction through the use of thinner aluminum skin.

      The Bf 109 had a poor safety record in large part because of the narrow landing gear.  This design kept the mass well centralized, but it made landing far too difficult for inexpert pilots.

      The ammunition for the massive 37mm cannon in the P-39 and P-63 was located in the nose, far enough forward that depleting the ammunition significantly affected the aircraft's stability.  Once the ammunition was expended, the aircraft was much more likely to enter dangerous spins.

      The cockpit of the FW-190, while roomier than the Bf 109, had terrible forward visibility.  The pilot could see to the sides and rear well enough, but a combination of a relatively wide radial engine and a hump on top of the engine cowling to house the synchronized machine guns meant that the pilot could see very little.  This could be dangerous while taxiing on the ground.
