At the end of January 2018, after many false starts, the Russian military formally announced the limited adoption of the AEK-971 and AEK-973 rifles. These rifles feature an unusual counterbalanced breech mechanism intended to improve handling, especially during full-auto fire. While exotic outside of Russia, these counterbalanced rifles are not at all new. In fact, the 2018 adoption of the AEK-971 represents the first success of a rifle concept that has been around for some time.
Animated diagram of the AK-107/108
Balanced action recoil systems (BARS) work by accelerating a mass in the opposite direction of the bolt carrier. The countermass is of similar mass to the bolt carrier and is synchronized to move in the opposite direction by a rack and pinion. This cancels out some, but not all, of the impulses associated with self-loading actions. But more on that later.
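The momentum bookkeeping is simple enough to sketch. The numbers below are illustrative assumptions, not measured data from any BARS rifle; the point is just that a countermass of similar mass, forced by the rack and pinion to mirror the carrier's velocity, cancels most of the reciprocating impulse.

```python
# Rough sketch (hypothetical numbers) of why a balanced action reduces felt impulse.
# A rack and pinion forces the countermass to mirror the bolt carrier's velocity.

bolt_carrier_mass = 0.50   # kg, assumed
countermass_mass  = 0.45   # kg, assumed "similar" mass
carrier_velocity  = 5.0    # m/s rearward at a given instant, assumed

# Momentum imparted to the receiver if nothing balances the carrier:
unbalanced = bolt_carrier_mass * carrier_velocity

# With the countermass moving forward at the same speed, most of it cancels:
balanced = bolt_carrier_mass * carrier_velocity - countermass_mass * carrier_velocity

print(f"unbalanced impulse contribution: {unbalanced:.2f} kg*m/s")
print(f"balanced impulse contribution:   {balanced:.2f} kg*m/s")
```

Because the masses are only similar, not identical, a residual impulse remains, which is why the cancellation is partial.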
Long before Soviet small arms engineers began experimenting with BARS, a number of production weapons featured synchronized masses moving in opposite directions. Generally speaking, any stabilization that these actions provided was an incidental benefit. Rather, these designs were either attempts to get around patents, or very early developments in the history of autoloading weapons when the design best practices had not been standardized yet. These designs featured a forward-moving gas trap that, of necessity, needed its motion converted into rearward motion by either a lever or rack and pinion.
The French St. Etienne Machine Gun
The Danish Bang rifle
At around the same time, inventors started toying with the idea of using synchronized counter-masses deliberately to cancel out recoil impulses. The earliest patent for such a design dates from 1908, from the obscure firearms designer Ludwig Mertens:
More information on these early developments is in this article on the matter by Max Popenker.
Soviet designers began investigating the BARS concept in earnest in the early 1970s. This is worth noting: these early BARS rifles were actually trialed against the AK-74.
The AL-7 rifle, a BARS rifle from the early 1970s
The Soviet military chose the more mechanically orthodox AK-74 as a stopgap measure in order to get a small-caliber, high-velocity rifle to the front lines as quickly as possible. Of course, the thing about stopgap weapons is that they always end up hanging around longer than intended, and forty-four years later Russian troops are still equipped with the AK-74.
A small number of submachine gun prototypes with a BARS-like system were trialed, but not mass-produced. The gas-operated action of a rifle can be balanced with a fairly small synchronizer rack and pinion, but the blowback action of a submachine gun requires a fairly large and massive synchronizer gear or lever. This is because in a gas-operated rifle a second gas piston can be attached to the countermass, thereby unloading the synchronizer gear.
There are three BARS designs of note from Russia:
The AK-107 and AK-108 are BARS rifles in 5.45x39mm and 5.56x45mm respectively. These rifles are products of the Kalashnikov design bureau and the Izhmash factory, now Kalashnikov Concern. Internally they are very similar to an AK, only with the countermass and synchronizer unit situated above the bolt carrier group.
Close up of synchronizer and dual return spring assemblies
This configuration is almost identical to the AL-7 design of the early 1970s. Like the more conventional AK-100 series, the AK-107/AK-108 were offered for export during the late 1990s and early 2000s, but they failed to attract any customers. The furniture is very similar to the AK-100 series, and indeed the only obvious external difference is the long tube protruding from the gas block and bridging the gap to the front sight.
The AK-107 has re-emerged recently as the Saiga 107, a rifle clearly intended for competitive shooting events like 3-gun.
The rival Kovrov design bureau was only slightly behind the Kalashnikov design bureau in exploring the BARS concept. Their earliest prototype featuring the system, the SA-006 (also transliterated as CA-006), also dates from the early 1970s.
Chief designer Sergey Koksharov refined this design into the AEK-971. The chief refinement of his design over the first-generation balanced action prototypes from the early 1970s is that the countermass sits inside the bolt carrier, rather than being stacked on top of it. This is a more compact installation of the mechanism, but otherwise accomplishes the same thing.
Moving parts group of the AEK-971
The early AEK-971 had a triangular metal buttstock and a Kalashnikov-style safety lever on the right side of the rifle.
In this guise the rifle competed unsuccessfully with Nikonov's AN-94 design in the Abakan competition. Considering that a relative handful of AN-94s were ever produced, this was perhaps not a terrible loss for the Kovrov design bureau.
After the end of the Soviet Union, the AEK-971 design was picked up by the Degtyarev factory, itself a division of the state-owned Rostec.
The Degtyarev factory would try, unsuccessfully, to sell the weapon for the next twenty-four years. In the meantime, they made some small refinements to the rifle. The Kalashnikov-style safety lever was deleted and replaced with a thumb safety on the left side of the receiver.
Later on the Degtyarev factory caught HK fever, and a very HK-esque sliding metal stock was added in addition to a very HK-esque rear sight. The thumb safety lever was also made ambidextrous. The handguard was changed a few times.
Still, reception of the rifle was lukewarm. The 2018 announcement that the rifle would be procured in limited numbers alongside more conventional AK rifles is not exactly a coup. The numbers bought are likely to be very low. A 5.56mm AEK-972 and a 7.62x39mm AEK-973 also exist. The newest version of the rifle has been referred to as the A-545.
AKB and AKB-1
AKB, closeup of the receiver
The AKB and AKB-1 are a pair of painfully obscure designs by Viktor Kalashnikov, Mikhail Kalashnikov's son. The later AKB-1 is the more conservative of the two, while the AKB is quite wild.
Both rifles use a more or less conventional AK-type bolt carrier, but the AKB uses the barrel as the countermass. That's right: the entire barrel shoots forward while the bolt carrier moves back! This unusual arrangement also allowed for an extremely high cyclic rate of fire: 2000 RPM. Later on, a burst limiter and rate-of-fire limiter were added. The rifle would fire at the full 2000 RPM for two-round bursts, but a mere 1000 RPM in full auto.
The AKB-1 was a far more conventional design, but it still had a BARS. In this design the countermass was nested inside the main bolt carrier, similar to the AEK-971.
Not a great deal of information is available about these rifles, but @Hrachya H wrote an article on them which can be read here.
Tank design is often conceptualized as a balance between mobility, protection, and firepower. This is, at best, a messy and imprecise conceptualization. It is messy because these three traits cannot be completely separated from each other. An APC, for example, that provides basic protection against small arms fire and shell fragments is effectively more mobile than an open-topped vehicle, because the APC can traverse areas swept by artillery fires that are closed off entirely to the open-topped vehicle. It is an imprecise conceptualization because broad ideas like "mobility" are very complex in practice. The M1 Abrams burns more fuel than the Leo 2, but the Leo 2 requires diesel fuel, while the omnivorous AGT-1500 will run happily on anything liquid and flammable. Which has better strategic mobility? The Soviet rail loading gauge was slightly wider than the Western European standard: 3.32 vs. 3.15 meters. But Soviet tanks in the Cold War were generally kept lighter and smaller, and had to be, in order to be moved in large numbers on a rail and road network that was not as robust as that further west. So if NATO and the Warsaw Pact had switched tanks in the late 1950s, both would have downgraded the strategic mobility of their forces: the Soviet tanks would be slightly too wide for unrestricted movement on rails in the free world, and the NATO tanks would have demanded more logistical support per tank than evil atheist commie formations were designed to provide.
So instead of wading into a deep and subtle subject, I am going to write about something that is extremely simple and easy to describe in mathematical terms: the top speed of a tank moving in a straight line. Because it is so simple and straightforward to understand, it is also nearly meaningless in terms of the combat performance of a tank.
In short, the top speed of a tank is limited by three things: the gear ratio limit, the power limit, and the suspension limit. The tank's maximum speed will be whichever of these limits is the lowest on a given terrain. The top speed of a tank is of limited significance, even from a tactical perspective, because the tank's ability to exploit its top speed is constrained by other factors. A high top speed, however, looks great on sales brochures, and there are examples of tanks that were designed with pointlessly high top speeds in order to overawe people who needed impressing.
When this baby hits 88 miles per hour, you're going to see some serious shit.
The Gear Ratio Limit
Every engine has a maximum speed at which it can turn. Often, the engine is artificially governed to a maximum speed slightly less than what it is mechanically capable of in order to reduce wear. Additionally, most piston engines develop their maximum power at slightly less than their maximum speed due to valve timing issues:
A typical power/speed relationship for an Otto Cycle engine. Otto Cycle engines are primitive devices that are only used when the Brayton Cycle Master Race is unavailable.
Most tanks have predominantly or purely mechanical drivetrains, which exchange rotational speed for torque by easily measurable ratios. The maximum rotational speed of the engine, multiplied by the gear ratio of the highest gear in the transmission, by the gear ratio of the final drives, and by the circumference of the drive sprocket, equals the gear ratio limit of the tank. The tank is unable to achieve higher speeds than the gear ratio limit because it physically cannot spin its tracks around any faster.
Most spec sheets don't actually give out the transmission ratios in different gears, but such excessively detailed specification sheets are provided in Germany's Tiger Tanks by Hilary Doyle and Thomas Jentz. The gear ratios, final drive ratios, and maximum engine RPM of the Tiger II are all provided, along with a handy table of the vehicle's maximum speed in each gear. In eighth gear, the top speed is given as 41.5 KPH, but that is at an engine speed of 3000 RPM, and in reality the German tank engines were governed to less than that in order to conserve their service life. At a more realistic 2500 RPM, the mighty Tiger II would have managed 34.6 KPH.
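Because track speed scales linearly with engine RPM at a fixed overall gear ratio, the Tiger II figures from Doyle and Jentz can be rescaled directly without knowing the individual ratios. A minimal sketch:

```python
# Gear ratio limit: at a fixed overall gear ratio, track speed is directly
# proportional to engine RPM, so a governed top speed can be rescaled from
# the rated figure.

def gear_ratio_limit_kph(speed_at_rated_kph: float, rated_rpm: float, actual_rpm: float) -> float:
    """Top speed in a given gear, rescaled to a different engine speed."""
    return speed_at_rated_kph * actual_rpm / rated_rpm

# 41.5 KPH in eighth gear at 3000 RPM, governed to a more realistic 2500 RPM:
print(round(gear_ratio_limit_kph(41.5, 3000, 2500), 1))  # -> 34.6
```

This reproduces the 34.6 KPH figure quoted above for the governed Tiger II.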
In principle there are analogous limits for electrical and hydraulic drive components based on free speeds and stall torques, but they are a little more complicated to actually calculate.
Part of the transmission from an M4 Sherman, picture from Jeeps_Guns_Tanks' great Sherman website
The Power Limit
So a Tiger II could totally go 34.6 KPH in combat, right? Well, perhaps. And by "perhaps," I mean "lolololololol, fuck no." I defy you to find me a test report where anybody manages to get a Tiger II over 33 KPH. While the meticulous engineers of Henschel did accurately transcribe the gear ratios of the transmission and final drives, and did manage to use their tape measures correctly when measuring the drive sprockets, their rosy projections of the top speed did not account for the power limit.
As a tank moves, power from the engine is wasted in various ways and so is unavailable to accelerate the tank. As the tank goes faster and faster, the magnitude of these power-wasting phenomena grows, until there is no surplus power to accelerate the tank any more. The system reaches equilibrium, and the tank maxes out at some top speed where it hits its power limit (unless, of course, the tank hits its gear ratio limit first).
The actual power available to a tank is not the same as the gross power of the motor. Some of the gross horsepower of the motor has to be directed to fans to cool the engine (except, of course, in the case of the Brayton Cycle Master Race, whose engines are almost completely self-cooling). The transmission and final drives are not perfectly efficient either, and waste a significant amount of the power flowing through them as heat. As a result of this, the actual power available at the sprocket is typically between 61% and 74% of the engine's quoted gross power.
Once the power does hit the drive sprocket, it is wasted in overcoming the friction of the tank's tracks, in churning up the ground the tank is on, and in aerodynamic drag. I have helpfully listed these in the order of decreasing importance.
The drag coefficient of a cube (which is a sufficiently accurate physical representation of a Tiger II) is 0.8. This, multiplied by half the fluid density of air (1.2 kg/m^3), times the velocity (9.4 m/s) squared, times a rough frontal area of 3.8 by 3 meters, gives a force of 483 newtons of drag. This multiplied by the velocity of the Tiger II gives 4.5 kilowatts, or about six horsepower, lost to drag. With the governor installed, the HL 230 could put out about 580 horsepower, which would be four hundred something horses at the sprocket, so the aerodynamic drag would be about 1.5% of the total available power. Negligible. Tanks are just too slow to lose much power to aerodynamic effects.
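The arithmetic above is easy to reproduce. This sketch just plugs the same numbers into the standard drag equation:

```python
# Reproducing the back-of-the-envelope aerodynamic drag estimate for the Tiger II.
cd = 0.8            # drag coefficient of a cube
rho = 1.2           # air density, kg/m^3
v = 9.4             # ~34 KPH expressed in m/s
area = 3.8 * 3.0    # rough frontal area, m^2

drag_n = 0.5 * rho * cd * v**2 * area   # drag force in newtons
power_w = drag_n * v                    # power lost to drag, watts
power_hp = power_w / 745.7              # mechanical horsepower

print(f"{drag_n:.1f} N, {power_w / 1000:.1f} kW, {power_hp:.1f} hp")
```

About six horsepower out of four hundred at the sprocket, confirming that drag is negligible at tank speeds.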
Losses to the soil can be important, depending on the surface the tank is operating on. On a nice, hard surface like a paved road there will be minimal losses between the tank's tracks and the surface. Off-road, however, the tank's tracks will start to sink into soil or mud, and more power will be wasted in churning up the soil. If the soil is loose or boggy enough, the tank will simply sink in and be immobilized. Tanks that spread their weight out over a larger area will lose less power, and be able to traverse soft soils at higher speed. This paper from the UK shows the relationship between mean maximum pressure (MMP), and the increase in rolling resistance on various soils and sands in excruciating detail. In general, tanks with more track area, with more and bigger road wheels, and with longer track pitch will have lower MMP, and will sink into soft soils less and therefore lose less top speed.
The largest loss of power usually comes from friction within the tracks themselves. This is sometimes called rolling resistance, but this term is also used to mean other, subtly different things, so it pays to be precise. Compared to wheeled vehicles, tracked vehicles have extremely high rolling resistance, and lose a lot of power just keeping the tracks turning. Rolling resistance is generally expressed as a dimensionless coefficient, CR, which multiplied against vehicle weight gives the force of friction. This chart from R.M. Ogorkiewicz' Technology of Tanks shows experimentally determined rolling resistance coefficients for various tracked vehicles:
The rolling resistance coefficients given here show that a tracked vehicle on ideal testing-ground conditions is about as efficient as a car driving over loose gravel. It also shows that rolling resistance increases with vehicle speed. A rough approximation of this increase in CR is given by the equation CR = A + B*V, where A and B are constants and V is vehicle speed. Ogorkiewicz explains:
It should be noted that the lubricated needle bearing track joints of which he speaks were only ever used by the Germans in WWII because they were insanely complicated. Band tracks have lower rolling resistance than metal link tracks, but they really aren't practical for vehicles much above thirty tonnes. Other ways of reducing rolling resistance include using larger road wheels, omitting return rollers, and reducing track tension. Obviously, there are practical limits to these approaches.
To calculate power losses due to rolling resistance, multiply vehicle weight by CR by vehicle velocity to get power lost. The velocity at which the power lost to rolling resistance equals the power available at the sprocket is the power limit on the speed of the tank.
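Setting power lost equal to power available gives a quadratic in V that can be solved directly. The constants below are illustrative assumptions in the spirit of the chart, not measured data for any particular tank:

```python
import math

# Power limit sketch: rolling-resistance power W * (A + B*V) * V equals the
# power available at the sprocket. All constants here are illustrative
# assumptions, not measurements.

weight_n = 70_000 * 9.81        # ~70 tonne heavy tank, in newtons
cr_a, cr_b = 0.04, 0.002        # assumed constants in CR = A + B*V (V in m/s)
sprocket_power_w = 400 * 745.7  # ~400 hp actually reaching the sprocket

# W*B*V^2 + W*A*V - P = 0; keep the positive root of the quadratic:
a, b, c = weight_n * cr_b, weight_n * cr_a, -sprocket_power_w
v = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(f"power-limited speed: {v:.1f} m/s ({v * 3.6:.1f} KPH)")
```

With these assumed numbers the power limit lands around 28 KPH, comfortably below the gear ratio limit, which is the usual situation off-road.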
The Suspension Limit
The suspension limit on speed is starting to get dangerously far away from the world of spherical, frictionless horses where everything is easy to calculate using simple algebra, so I will be brief. In addition to not being completely covered in paved surfaces that minimize rolling resistance, the continents of the world are also not perfectly flat. This means that in order to travel at high speed off road, tanks require some sort of suspension, or else they would shake their crews into jelly. If the crew is being shaken too much to operate effectively, then it doesn't really matter whether a tank has a high enough gear ratio limit or power limit to go faster. This is also particularly obnoxious because suspension performance is difficult to quantify, as it involves resonance frequencies, damping coefficients, and a bunch of other complicated shit.
Suffice it to say, then, that a very rough estimate of the ride-smoothing qualities of a tank's suspension can be made from the total travel of its road wheels:
This chart from Technology of Tanks is helpful. A more detailed discussion of the subject of tank suspension can be found here.
The Real World Rudely Intrudes
So, how useful is high top speed in a tank in messy, hard-to-mathematically-express reality? The answer might surprise you!
A Wehrmacht M.A.N. combustotron Ausf G
We'll take some whacks at everyone's favorite whipping boy; the Panther.
A US report on a captured Panther Ausf G gives its top speed as an absolutely blistering 60 KPH on roads. The Soviets could only get their captured Ausf D to do 50 KPH, but compared to a Sherman, which is generally only credited with 40 KPH on roads, that's alarmingly fast.
So, would this mean that the Panther enjoyed a mobility advantage over the Sherman? Would this mean that it was better able to make quick advances and daring flanking maneuvers during a battle?
In field tests the British found the Panther to have lower off-road speed than a Churchill VII (the Panther had a slightly busted transmission, though). In the same American report that credits the Panther Ausf G with a 60 KPH top speed on roads, it was found that off road the Panther was almost exactly as fast as an M4A3(76)W, with individual Shermans slightly outpacing the big cat or lagging slightly behind it. Another US report from January 1945 states that over courses with many turns and curves, the Sherman would pull ahead because it lost less speed negotiating corners. Clearly, the Panther's advantage in straight-line speed did not translate into better mobility in any combat scenario that did not involve drag racing.
So what was going on with the Panther? How could it leave everything but light tanks in the dust on a straight highway, but be outpaced by the ponderous Churchill heavy tank in actual field tests?
Panther Ausf A tanks captured by the Soviets
A British report from 1946 on the Panther's transmission explains what was going on. The Panther's transmission had seven forward gears, but off-road it really couldn't make it out of fifth. In other words, the Panther had an extremely high gear ratio limit that allowed it exceptional speed on roads. However, the Panther's mediocre power-to-weight ratio (nominally 13 hp/ton for the RPM-limited HL 230) meant that once the tank was off road and fighting mud, it only had a mediocre power limit. Indeed, it is a testament to the efficiency of the Panther's running gear that it could keep up with Shermans at all, since the Panther's power-to-weight ratio was about 20% lower than that of that particular variant of Sherman.
There were other factors limiting the Panther's speed in practical circumstances. The geared steering system used in the Panther had different steering radii based on what gear the Panther was in. The higher the gear, the wider the turn. In theory this was excellent, but in practice the designers chose too wide a turn radius for each gear, which meant that for any but the gentlest turns the Panther's driver would need to slow down and downshift in order to complete the turn, thus sacrificing any speed advantage his tank enjoyed.
So why would a tank be designed in such a strange fashion? The British thought that the Panther was originally designed to be much lighter, and that the transmission had never been re-designed in order to compensate. Given the weight gain that the Panther experienced early in development, this explanation seems like it may be partially true. However, when interrogated, Ernst Kniepkamp, a senior engineer in Germany's wartime tank development bureaucracy, stated that the additional gears were there simply to give the Panther a high speed on roads, because it looked good to senior generals.
So, this is the danger in evaluating tanks based on extremely simplistic performance metrics that look good on paper. They may be simple to digest and simple to calculate, but in the messy real world, they may mean simply nothing.
But if you try sometimes...
Fighter aircraft became much better during the Second World War. But, apart from the development of engines, it was not a straightforward matter of monotonous improvement. Aircraft are a series of compromises. Improving one aspect of performance almost always compromises others. So, for aircraft designers in World War Two, the question was not so much "what will we do to make this aircraft better?" but "what are we willing to sacrifice?"
To explain why, let's look at the forces acting on an aircraft:
Lift is the force that keeps the aircraft from becoming one with the Earth. It is generally considered a good thing.
The lift equation is L = 0.5 * CL * R * V^2 * A, where L is lift, CL is the lift coefficient (a measure of the effectiveness of the wing based on its shape and other factors), R is air density, V is airspeed, and A is the area of the wing.
Airspeed is very important to an aircraft's ability to make lift, since the force of lift grows with the square of airspeed and in linear relation to all other factors. This means that aircraft will have trouble producing adequate lift during takeoff and landing, since that's when they slow down the most.
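The square-law dependence is worth seeing numerically. The coefficient and wing area below are round illustrative assumptions, not figures for any particular aircraft:

```python
# The lift equation L = 0.5 * CL * R * V^2 * A, with illustrative numbers
# to show the square-law dependence of lift on airspeed.

def lift_n(cl: float, rho: float, v: float, area: float) -> float:
    return 0.5 * cl * rho * v**2 * area

cl, rho, area = 0.5, 1.225, 22.0   # assumed lift coefficient, sea-level density, m^2

cruise = lift_n(cl, rho, 120.0, area)    # ~430 KPH
approach = lift_n(cl, rho, 60.0, area)   # half the airspeed

# Halving airspeed cuts lift to a quarter -- hence the trouble at takeoff and landing:
print(cruise / approach)  # -> 4.0
```

To fly level at low speed, the pilot has to claw that missing lift back with a higher angle of attack, which is why approaches happen close to the stall.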
Altitude is also a significant factor to an aircraft's ability to make lift. The density of air decreases at an approximately linear rate with altitude above sea level:
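For a sense of the magnitude, here is the standard-atmosphere troposphere model (a textbook formula, not taken from this article's chart); the falloff is close to linear low down and a bit steeper than linear at fighter altitudes:

```python
# International Standard Atmosphere troposphere model, valid up to ~11 km.

def isa_density(h_m: float) -> float:
    """Air density in kg/m^3 at altitude h_m meters above sea level."""
    t0, lapse, rho0 = 288.15, 0.0065, 1.225   # sea-level temp (K), lapse rate (K/m), density
    g, r = 9.80665, 287.053                   # gravity (m/s^2), gas constant for air
    t = t0 - lapse * h_m
    return rho0 * (t / t0) ** (g / (r * lapse) - 1)

for h in (0, 3000, 6000, 9000):
    print(h, round(isa_density(h), 3))
```

At 6000 meters, roughly where much WWII air combat happened, the air is only about 54% as dense as at sea level, with proportionate effects on lift.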
Finally, wings work better the bigger they are. Wing area directly relates to lift production, provided that wing shape is kept constant.
While coefficient of lift CL contains many complex factors, one important and relatively simple factor is the angle of attack, also called AOA or alpha. The more tilted an airfoil is relative to the airflow, the more lift it will generate. The lift coefficient (and thus lift force, all other factors remaining equal) increases more or less linearly until the airfoil stalls:
Essentially what's going on is that the greater the AOA, the more the wing "bends" the air around the wing. But the airflow can only become so bent before it detaches. Once the wing is stalled it doesn't stop producing lift entirely, but it does create substantially less lift than it was just before it stalled.
Drag is the force acting against the movement of any object travelling through a fluid. Since it slows aircraft down and makes them waste fuel in overcoming it, drag is a total buzzkill and is generally considered a bad thing.
The drag equation is D = 0.5 * CD * R * V^2 * A, where D is drag, CD is the drag coefficient (a measure of how "draggy" a given aircraft is), R is air density, V is airspeed, and A is the frontal area of the aircraft.
This equation is obviously very similar to the lift equation, and this is where designers hit the first big snag. Lift is good and drag is bad, but because the factors that cause these forces are so similar, most measures that increase lift also increase drag, and most measures that reduce drag also reduce lift.
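The coupling is visible directly in the shared form of the two equations. In this sketch (coefficients and areas are assumed round numbers), scaling the wing area scales lift and drag by the same factor, so the lift-to-drag ratio is fixed by the coefficients alone:

```python
# L = 0.5*CL*R*V^2*A and D = 0.5*CD*R*V^2*A share the same structure, so a
# change in area A scales both forces identically. All numbers are assumed.

def force(coeff: float, rho: float, v: float, area: float) -> float:
    return 0.5 * coeff * rho * v**2 * area

rho, v = 1.225, 120.0   # sea-level density, cruise-ish airspeed in m/s
cl, cd = 0.5, 0.03      # assumed lift and drag coefficients

for area in (18.0, 22.0, 26.0):
    l, d = force(cl, rho, v, area), force(cd, rho, v, area)
    print(f"A={area}: lift {l / 1000:.1f} kN, drag {d / 1000:.2f} kN, L/D = {l / d:.1f}")
```

A bigger wing buys lift only at the price of drag; to change the ratio itself, designers had to attack the coefficients, which is what laminar-flow airfoils attempted.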
Generally speaking, wing loading (the aircraft's weight divided by its wing area) increased with newer aircraft models. The stall speed (the slowest possible speed at which an aircraft can fly without stalling) also increased. The massive increases in engine power alone were not sufficient to provide the increases in speed that designers wanted. They had to deliberately sacrifice lift production in order to minimize drag.
World War Two saw the introduction of laminar-flow wings. These were wings that had a cross-section (or airfoil) that generated less turbulent airflow than previous airfoil designs. However, they also generated much less lift. Watch a B-17 (which does not have a laminar-flow wing) and a B-24 (which does) take off. The B-24 eats up a lot more runway before its nose pulls up.
There are many causes of aerodynamic drag, but drag on a WWII fighter aircraft can be broken down into two major categories. There is induced drag, which is caused by wingtip vortices and is a byproduct of lift production, and parasitic drag, which is everything else. Induced drag is interesting in that it actually decreases with airspeed. So for takeoff and landing it is a major consideration, but for cruising flight it is less important.
However, induced drag is also significant during combat maneuvering. Wings with a higher aspect ratio, that is, a higher ratio of wingspan to wing chord (the distance from the leading edge to the trailing edge of the wing), produce less induced drag.
So, for the purposes of producing good cruise efficiency, reducing induced drag was not a major consideration. For producing the best maneuvering fighter, reducing induced drag was significant.
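The inverse relationship between induced drag and airspeed falls out of the classic relation CDi = CL^2 / (pi * e * AR): in level flight lift must equal weight, so the required CL (and with it induced drag) falls as airspeed rises. The weight, wing, and efficiency numbers below are assumptions for illustration:

```python
import math

# Induced drag sketch: in level flight lift equals weight, so the required CL
# falls with airspeed, and induced drag falls with it. Numbers are assumed.

weight_n = 30_000           # ~3 tonne fighter, in newtons
rho, area = 1.225, 22.0     # sea-level density, wing area in m^2
aspect_ratio, e = 6.0, 0.8  # wing aspect ratio and Oswald efficiency factor

def induced_drag_n(v: float) -> float:
    cl = weight_n / (0.5 * rho * v**2 * area)       # CL needed to hold the plane up
    cdi = cl**2 / (math.pi * e * aspect_ratio)      # classic induced-drag relation
    return 0.5 * rho * v**2 * area * cdi

for v in (60.0, 90.0, 120.0):
    print(f"{v:.0f} m/s: {induced_drag_n(v):.0f} N")
```

In this simple model induced drag falls with the square of airspeed, which is why it dominates at takeoff and in hard turns but fades at cruise.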
Weight is the force counteracting lift. The more weight an aircraft has, the more lift it needs to produce. The more lift it needs to produce, the larger the wings need to be and the more drag they create. The more weight an aircraft has, the less it can carry. The more weight an aircraft has, the more sluggishly it accelerates. In general, weight is a bad thing for aircraft. But for fighters in WWII, weight wasn't entirely a bad thing. The more weight an aircraft has relative to its drag, the faster it can dive. Diving away to escape enemies if a fight was not going well was a useful tactic. The P-47, which was extremely heavy, but comparatively well streamlined, could easily out-dive the FW-190A and Bf-109G/K.
In general though, designers tried every possible trick to reduce aircraft weight. Early in the war, stressed-skin monocoque designs began to take over from the fabric-covered, built-up tube designs.
The old-style construction of the Hawker Hurricane. It's a shit plane.
Stressed-skin construction of the Spitfire, with a much better strength to weight ratio.
But as the war dragged on, designers tried even more creative ways to reduce weight. This went so far as reducing the weight of the rivets holding the aircraft together, stripping the aircraft of any unnecessary paint, and even removing or downgrading some of the guns.
An RAF Brewster Buffalo in the Pacific theater. The British downgraded the .50 caliber machine guns to .303 weapons in order to reduce weight.
In some cases, however, older construction techniques were used at the war's end due to materials shortages or for cost reasons. The German TA-152, for instance, used a large amount of wooden construction with steel reinforcement in the rear fuselage and tail in order to conserve aluminum. This was not as light or as strong as aluminum, but beggars can't be choosers.
Extensive use of (now rotten) wood in the rear fuselage of the TA-152
Generally speaking, aircraft get heavier with each variant. The Bf-109C of the late 1930s weighed 1,600 kg, but the Bf-109G of the second half of WWII had ballooned to over 2,200 kg. One notable exception was the Soviet YAK-3:
The YAK-3, which was originally designated YAK-1M, was a demonstration of what designers could accomplish if they had the discipline to keep aircraft weight as low as possible. Originally, the plan was to improve the YAK-1 (which had somewhat mediocre performance against German fighters) by installing a new engine with more power. But all of the new and more powerful engines proved troublesome and unreliable. Without any immediate prospect of more engine power, the Yakovlev engineers instead improved performance by reducing weight. The YAK-3 ended up weighing nearly 300 kg less than the YAK-1, and the difference in performance was startling. At low altitude the YAK-3 had a tighter turn radius than anything the Luftwaffe had.
Thrust is the force propelling the aircraft forwards. It is generally considered a good thing. Thrust was one area where engineers could and did make improvements with very few other compromises. The art of high-output piston engine design was refined during WWII to a precise science, only to be immediately rendered obsolete by the development of jet engines.
Piston-engined aircraft convert engine horsepower into thrust via a propeller. Thrust was increased during WWII primarily by making the engines more powerful, although there were also some improvements in propeller design and efficiency. A tertiary source of thrust was the addition of jet thrust from the exhaust of the piston engines and from Meredith Effect radiators.
The power output of WWII fighter engines was improved in two ways: first by making the engines larger, and second by making them more powerful relative to their weight. Neither process was particularly straightforward or easy, but nonetheless drastic improvements were made from the war's beginning to the war's end.
The Pratt and Whitney Twin Wasp R-1830-1 of the late 1930s could manage about 750-800 horsepower. By mid-war, the R-1830-43 was putting out 1200 horsepower out of the same displacement. Careful engineering, gradual improvements, and the use of fuel with a higher and more consistent octane level allowed for this kind of improvement.
The R-1830 Twin Wasp
However, there's no replacement for displacement. By the beginning of 1943, Japanese aircraft were being massacred with mechanical regularity by a new US Navy fighter, the F6F Hellcat, which was powered by a brand new Pratt and Whitney engine, the R-2800 Double Wasp.
The one true piston engine
As you can see from the cross-section above, the R-2800 has two banks of cylinders. This is significant to fighter performance because even though it had 53% more engine displacement than the Twin Wasp (for US engines, the numerical designation indicated engine displacement in cubic inches), the Double Wasp had only about 21% more frontal area. This meant that a fighter with the R-2800 enjoyed an increase in power that was not proportionate to the increase in drag. Early R-2800-1 models could produce 1800 horsepower, but by war's end the best models could make 2100 horsepower. That meant a 45% increase in horsepower relative to the frontal area of the engine. Power-to-weight ratios for the latest model R-1830 and R-2800 were similar, while power to displacement improved by about 14%.
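The percentages above check out with simple arithmetic on the figures already quoted (the 21% frontal area growth is taken as given):

```python
# Checking the R-1830 vs R-2800 comparison arithmetic from the text.

r1830_disp, r2800_disp = 1830, 2800   # displacement in cubic inches
r1830_hp, r2800_hp = 1200, 2100       # best outputs quoted above
area_growth = 1.21                    # R-2800 frontal area relative to R-1830

disp_growth = r2800_disp / r1830_disp
print(f"displacement: +{(disp_growth - 1) * 100:.0f}%")   # -> +53%

# Horsepower per unit of frontal area, R-2800 relative to R-1830:
power_per_area_gain = (r2800_hp / area_growth) / r1830_hp
print(f"power per frontal area: +{(power_per_area_gain - 1) * 100:.0f}%")   # -> +45%
```

Power per unit of frontal area is the figure of merit here, because frontal area is what drives the drag of a radial-engine installation.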
By war's end Pratt and Whitney had the monstrous R-4360 in production:
This gigantic engine had four rows of radially-arranged pistons. Compared to the R-2800 it produced about 50% more power for less than 10% more frontal area. Again, power to weight and power to displacement showed more modest improvements. The greatest gains were from increasing thrust with very little increase in drag. All of this was very hard for the engineers, who had to figure out how to make crankshafts and reduction gear that could handle that much power without breaking, and also how to get enough cooling air through a giant stack of cylinders.
Auxiliary power sources like rockets and ramjets were tried as a way of boosting fighter thrust, but without success.
Yes, that is a biplane with retractable landing gear and auxiliary ramjets under the wings. Cocaine is a hell of a drug.
A secondary source of improvement in thrust came from the development of better propellers. Most of this improvement came in the years just before WWII, and by the time the war started most combat aircraft had constant-speed propellers.
For optimal performance, the angle of attack of the propeller blades must be matched to the ratio of the forward speed of the aircraft to the circular velocity of the propeller tips. To cope with the changing requirements, constant speed or variable pitch propellers were invented that could adjust the angle of attack of the propeller blades relative to the hub.
There was also improvement in using the engine's exhaust and waste heat to increase thrust. Fairly early on, designers learned that the enormous volume of exhaust produced by the engine could be directed backwards to generate thrust. Exhaust stacks were designed to work as nozzles to harvest this small source of additional thrust:
The exhaust stacks of the Merlin engine in a Spitfire act like jet nozzles
A few aircraft also used the waste heat being rejected by the radiator to produce a small amount of additional thrust. The Meredith Effect radiator on the P-51 is the best-known example:
Excess heat from the engine was rejected into the airstream flowing through the radiator. The heat would expand the air, and the radiator duct was shaped to turn that expansion into acceleration. In essence, the radiator of the P-51 worked like a very weak ramjet. By the most optimistic projections, this additional thrust would cancel out the drag of the radiator at maximum velocity. So it may not have provided net thrust, but it did still provide thrust, and every bit of thrust mattered.
For the most part, achieving specific design objectives in WWII fighters was a function of minimizing weight, maximizing lift, minimizing drag and maximizing thrust. But doing this in a satisfactory way usually meant emphasizing certain performance goals at the expense of others.
Top Speed, Dive Speed and Acceleration
During the 1920s and 1930s, the lack of any serious air to air combat allowed a number of crank theories on fighter design to develop and flourish. These included the turreted fighter:
The heavy fighter:
And fighters that placed far too much emphasis on turn rate at the expense of everything else:
But it quickly became clear, from combat in the Spanish Civil War, China, and early WWII, that going fast was where it was at. In a fight between an aircraft that was fast and an aircraft that was maneuverable, the maneuverable aircraft could twist and pirouette in order to force the situation to their advantage, while the fast aircraft could just GTFO the second that the situation started to sour. In fact, this situation would prevail until the early jet age when the massive increase in drag from supersonic flight made going faster difficult, and the development of heat-seeking missiles made it dangerous to run from a fight with jet nozzles pointed towards the enemy.
The top speed of an aircraft is the speed at which drag and thrust balance each other out, and the aircraft stops accelerating. Maximizing top speed means minimizing drag and maximizing thrust. The heavy fighters had a major, inherent disadvantage in terms of top speed. This is because twin engined prop fighters have three big lumps contributing to frontal area; two engines and the fuselage. A single engine fighter only has the engine, with the rest of the fuselage tucked neatly behind it. The turret fighter isn't as bad; the turret contributes some additional drag, but not as much as the twin-engine design does. It does, however, add quite a bit of weight, which cripples acceleration even if it has a smaller effect on top speed. Early-war Japanese and Italian fighters were designed with dogfight performance above all other considerations, which meant that they had large wings to generate large turning forces, and often had open cockpits for the best possible visibility. Both of these features added drag, and left these aircraft too slow to compete against aircraft that sacrificed some maneuverability for pure speed.
Drag force rises roughly as a square function of airspeed (throw this formula out the window when you reach speeds near the speed of sound). Power is equal to force times distance over time, or force times velocity. So, power consumed by drag will be equal to drag coefficient times frontal area times airspeed squared times airspeed. So, the power required for a given maximum airspeed will be a roughly cubic function. And that is assuming that the efficiency of the propeller remains constant!
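The cubic relationship is worth seeing numerically. This sketch uses a made-up drag coefficient and frontal area; the point is the scaling, not the absolute numbers:

```python
# Power consumed by parasitic drag grows with the cube of airspeed:
# drag = 0.5 * rho * Cd * A * v^2, and power = drag * v.
rho = 1.225          # sea-level air density, kg/m^3
cd_times_area = 0.5  # drag coefficient * frontal area, m^2 (illustrative)

def drag_power_kw(v_ms):
    drag_n = 0.5 * rho * cd_times_area * v_ms ** 2
    return drag_n * v_ms / 1000

# Going 26% faster (the cube root of 2) requires double the power:
p1 = drag_power_kw(150.0)
p2 = drag_power_kw(150.0 * 2 ** (1 / 3))
print(p2 / p1)  # ≈ 2.0
```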
Acceleration is (thrust − drag) divided by mass. It is possible for an aircraft to have a high maximum speed but quite poor acceleration, and vice versa. Indeed, the A6M5 Zero had a somewhat better power-to-weight ratio than the F6F-5 Hellcat, but a considerably lower top speed. In a drag race the A6M5 would initially pull ahead, but it would be gradually overtaken by the Hellcat, which would eventually reach speeds that the Zero simply could not match.
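A toy numerical "drag race" shows how this plays out. The thrust, mass, and drag numbers below are invented for illustration, not measured A6M5 or F6F-5 data:

```python
# Integrate dv/dt = (thrust - drag_factor * v^2) / mass from a standing start
# for two hypothetical fighters: one with a better thrust-to-weight ratio,
# one with less drag and therefore a higher top speed.
def accelerate(thrust_n, mass_kg, drag_factor, seconds, dt=0.1):
    v = 0.0
    for _ in range(int(seconds / dt)):
        v += (thrust_n - drag_factor * v ** 2) / mass_kg * dt
    return v

light = dict(thrust_n=24_000, mass_kg=2_700, drag_factor=1.5)  # agile but draggy
heavy = dict(thrust_n=38_000, mass_kg=5_700, drag_factor=1.2)  # heavier but cleaner

print(accelerate(seconds=10, **light), accelerate(seconds=10, **heavy))    # light plane leads early
print(accelerate(seconds=120, **light), accelerate(seconds=120, **heavy))  # then falls behind
```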
Maximum dive speed is also a function of drag and thrust, but it's a bit different because the weight of the aircraft times the sine of the dive angle also counts towards thrust. In general this meant that large fighters dove better. Drag scales with the frontal area, which is a square function of size. Weight scales with volume (assuming constant density), which is a cubic function of size. Big American fighters like the P-47 and F4U dove much faster than their Axis opponents, and could pick up speed that their opponents could not hope to match in a dive.
A number of US fighters dove so quickly that they had problems with localized supersonic airflow. Supersonic airflow was very poorly understood at the time, and many pilots died before somewhat improvisational solutions like dive brakes were added.
Ranking US ace Richard Bong takes a look at the dive brakes of a P-38
Acceleration, top speed and dive speed are all improved by reducing drag, so every conceivable trick for reducing parasitic drag was tried.
The Lockheed P-38 used flush rivets on most surfaces as well as extensive butt welds to produce the smoothest possible flight surfaces. This did reduce drag, but it also contributed to the great cost of the P-38.
The Bf 109 was experimentally flown with a V-tail to reduce drag. V-tails have lower interference drag than conventional tails, but the modification was found to compromise handling during takeoff and landing too much and was not deemed worth the small amount of additional speed.
The Yak-3 was coated with a layer of hard wax to smooth out the wooden surface and reduce drag. This simple improvement increased top speed by a small but measurable amount! In addition, the largely wooden structure of the aircraft had few rivets, which meant even less drag.
The Dornier Do 335 was a novel approach to solving the problem of drag in twin-engine fighters. The two engines were placed at the front and rear of the aircraft, driving a tractor and a pusher propeller. This unconventional configuration led to some interesting problems, and the war ended before these could be solved.
The J2M Raiden had a long, tapered engine cowling that extended several feet in front of the engine. It housed an engine-driven fan for cooling air as well as a long extension shaft to drive the propeller. This did reduce drag, but at the expense of lengthening the nose, which reduced pilot visibility and moved the center of gravity rearward relative to the center of lift.
Designers were already stuffing the most powerful engines coming out of factories into aircraft, provided that they were reasonably reliable (and sometimes not even then). After that, the most expedient solution to improve speed was to sacrifice lift to reduce drag and make the wings smaller. The reduction in agility at low speeds was generally worth it, and at higher speeds relatively small wings could produce satisfactory maneuverability since lift is a square function of velocity. Alternatively, so-called laminar flow airfoils (they weren't actually laminar flow) were substituted, which produced less drag but also less lift.
The Bell P-63 had very similar aerodynamics to the P-39 and nearly the same engine, but was some 80 KPH faster thanks to the new laminar flow airfoils. However, the landing speed also increased by about 40 KPH, largely sacrificing the benign landing characteristics that P-39 pilots loved.
The biggest problem with reducing the lift of the wings to increase speed was that it made takeoff and landing difficult. Aircraft with less lift need to get to higher speeds to generate enough lift to take off, and need to land at higher speeds as well. As the war progressed, fighter aircraft generally became trickier to fly, and the butcher's bill of pilots lost in training and other accidents was enormous.
Turn Rate
Sometimes things didn't go as planned. A fighter might be ambushed, or an ambush could go wrong, and the fighter would need to turn, turn, turn. It might need to turn to get into a position to attack, or it might need to turn to evade an attack.
Aircraft in combat turn with their wings, not their rudders. This is because the wings are way, way bigger, and therefore much more effective at turning the aircraft. The rudder is just there to make the nose do what the pilot wants it to. The pilot rolls the aircraft until it's oriented correctly, and then begins the turn by pulling the nose up. Pulling the nose up increases the angle of attack, which increases the lift produced by the wings. This produces centripetal force which pulls the plane into the turn. Since WWII aircraft don't have the benefit of computer-run fly-by-wire flight control systems, the pilot would also make small corrections with rudder and ailerons during the turn.
But, as we saw above, making more lift means making more drag. Therefore, when aircraft turn they tend to slow down unless the pilot guns the throttle. Long after WWII, Col. John Boyd (PBUH) codified the relationship between drag, thrust, lift and weight as it relates to aircraft turning performance into an elegant mathematical model called energy-maneuverability theory, which also allowed for charts that depict these relationships.
Normally, I would gush about how wonderful E-M theory is, but as it turns out there's an actual aerospace engineer named John Golan who has already written a much better explanation than I would likely manage, so I'll just link that. And steal his diagram:
E-M charts are often called "doghouse plots" because of the shape they trace out. An E-M chart specifies the turning maneuverability of a given aircraft, with a given load of fuel and weapons, at a particular altitude. Turn rate is on the Y axis and airspeed is on the X axis. The aircraft can reach any combination of speed and turn rate inside the dotted line, though not necessarily hold it. It can only sustain a turn continuously (until it runs out of fuel) in the region that is both inside the dotted line and under the solid line.
The aircraft cannot fly to the left of the doghouse because at such low speeds it cannot produce enough lift to stay in the air; eventually it will run out of sky and hit the ground. The curved, right-side "roof" of the doghouse is a curve of constant load factor: fly outside it and the airframe or the pilot will break from G forces. Finally, the rightmost, vertical side of the doghouse is the maximum speed the aircraft can fly at; either it doesn't have the thrust to fly faster, or something breaks if the pilot tries. The peak of the "roof" of the doghouse represents the aircraft's ideal airspeed for maximum turn rate. This is usually called the "corner velocity" of the aircraft.
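The two boundaries that meet at the corner velocity can be sketched with a few lines of Python. The 8 g limit and 45 m/s one-g stall speed are assumed numbers, not any particular fighter's:

```python
import math

G = 9.81

def turn_rate_deg_s(v_ms, load_factor):
    # rate of a level, coordinated turn at a given speed and load factor
    return math.degrees(G * math.sqrt(load_factor ** 2 - 1) / v_ms)

n_limit = 8.0       # structural/pilot G limit (assumed)
v_stall_1g = 45.0   # 1-g stall speed, m/s (assumed)

# Maximum lift grows with v^2, so the attainable load factor at speed v is
# (v / v_stall_1g)^2, capped by the G limit. The corner velocity is where
# the stall boundary meets the G limit:
v_corner = v_stall_1g * math.sqrt(n_limit)
print(v_corner, turn_rate_deg_s(v_corner, n_limit))  # ≈ 127 m/s, ≈ 35 deg/s
```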
So, let's look at some actual (ish) EM charts:
Now, these are taken from a flight simulator, but they're accurate enough to illustrate the point. They're also a little busier than the example above, but still easy enough to understand. The gray plot overlaid on the chart consists of G-force (the curves) and turn radius (the straight lines radiating from the graph origin). The green doghouse shows the aircraft's performance with flaps. The red curve shows the maximum sustained turn rate. You may notice that the red line terminates on the X axis at a surprisingly low top speed; that's because these charts were made for a very low altitude confrontation, and maximum level top speed could only be achieved at higher altitudes. These aircraft could fly faster than the limits of the red line show, but only if they picked up extra speed in a dive. These charts could also be overlaid on each other for comparison, but in this case that would look like a graphic designer vomiting all over the screen, or a Studio Killers music video.
From these charts, we can conclude that at low altitude the P-51D enjoys many advantages over the Bf 109G-6. It has a higher top speed at this altitude, 350-something vs 320-something MPH. However, the P-51 has a lower corner speed. In general, the P-51's flight envelope at this altitude is just bigger. But that doesn't mean that the Bf 109 doesn't have a few tricks. As you can see, it enjoys a better sustained turn rate from about 175 to 325 MPH. Within that speed band, the 109 will be able to hold on to its energy better than the Pony, provided it uses only moderate turns.
During turning flight, our old problem induced drag comes back to haunt fighter designers. The induced drag equation is Cdi = (Cl^2) / (pi * AR * e). Where Cdi is the induced drag coefficient, Cl is the lift coefficient, pi is the irrational constant pi, AR is aspect ratio, or wingspan squared divided by wing area, and e is not the irrational constant e but an efficiency factor.
There are a few things of interest here. For starters, induced drag increases with the square of the lift coefficient. Lift coefficient increases more or less linearly (see above) with angle of attack. There are various tricks for increasing wing lift nonlinearly, as well as various tricks for generating lift with surfaces other than the wings, but in WWII, designers really didn't use these much. So, for all intents and purposes, the induced drag coefficient will increase with the square of angle of attack, and for a given airspeed, induced drag will increase with the square of the number of Gs the aircraft is pulling. Since this is a square function, it can outrun other, linear functions easily, so minimizing the effect of induced drag is a major consideration in improving the sustained turn performance of a fighter.
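The square-law growth falls straight out of the induced drag equation above. A quick sketch (the lift coefficient, aspect ratio, and efficiency factor are illustrative assumptions):

```python
import math

# Cdi = Cl^2 / (pi * AR * e), the induced drag equation from the text.
def induced_drag_coeff(cl, aspect_ratio, e=0.8):
    return cl ** 2 / (math.pi * aspect_ratio * e)

cl_level = 0.3  # lift coefficient in 1-g level flight (assumed)
ar = 6.0        # roughly WWII-fighter aspect ratio (assumed)

# At a fixed airspeed, pulling n gees requires n times the lift coefficient,
# so induced drag goes up with n squared:
for g in (1, 2, 4):
    ratio = induced_drag_coeff(g * cl_level, ar) / induced_drag_coeff(cl_level, ar)
    print(g, round(ratio))  # ratios: 1, 4, 16
```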
To maximize turn rate in a fighter, designers needed to make the fighter as light as possible, make the engine as powerful as possible, give the wings as much area as possible, make them as long and skinny as possible, and use the most efficient possible wing shape.
You probably noticed that two of these requirements, make the plane as light as possible and make the wings as large as possible, directly contradict the requirements of good dive performance. There is simply no way to reconcile them; the designers either needed to choose one, the other, or come to an intermediate compromise. There was no way to have both great turning performance and great diving performance.
Since the designers could generally be assumed to have reduced weight to the maximum possible extent and put the most powerful engine available into the aircraft, that left the design of the wings.
The larger the wings, the more lift they generate at a given angle of attack. The lower the angle of attack, the less induced drag. The bigger wings would add more drag in level flight and reduce top speed, but they would actually reduce drag during maneuvering flight and improve sustained turn rate. A rough estimate of the turning performance of the aircraft can be made by dividing the weight of the aircraft over its wing area. This is called wing loading, and people who ought to know better put far too much emphasis on it. If you have E-M charts, you don't need wing loading. However, E-M charts require quite a bit of aerodynamic data to calculate, while wing loading is much simpler.
Giving the wings a higher aspect ratio would also improve turn performance, but the designers' hands were somewhat tied in this respect. The wings usually stored the landing gear and often the armament of the fighter. In addition, the wings generated the lift, and making them too long and skinny would leave them too structurally flimsy to support the aircraft in maneuvering flight; that is, unless they were extensively reinforced, which would add weight and defeat the purpose. So designers were practically limited in how much they could vary the aspect ratio of fighter wings.
The wing planform has a significant effect on the efficiency factor e. The ideal shape to reduce induced drag is the "elliptical" (actually two half-ellipses) wing shape used on the Supermarine Spitfire.
This wing shape was, however, difficult to manufacture. By the end of the war, engineers had come up with several wing planforms that were nearly as efficient as the elliptical wing, but were much easier to manufacture.
Another way to reduce induced drag is to slightly twist the wings of the aircraft so that the wing tips point down.
This is called washout. The main purpose of washout was to improve the responsiveness of the ailerons during hard maneuvering, but it could give small efficiency improvements as well. Washout obviously complicates the manufacture of the wing, and thus it wasn't that common in WWII, although the Ta 152 notably did have three degrees of tip washout.
The Bf 109 had leading edge slats that would deploy automatically at high angles of attack. Again, the main intent of these devices was to improve the control of the aircraft during takeoff and landing and hard maneuvering, but they did slightly improve the maximum angle of attack the wing could be flown at, and therefore the maximum instantaneous turn rate of the aircraft. The downside of the slats was that they weakened the wing structure and precluded the placement of guns inside the wing.
leading edge slats of a Bf 109 in the extended position
One way to attempt to reconcile the conflicting requirements of high speed and good turning capability was the "butterfly" flaps seen on Japanese Nakajima fighters.
This model of a Ki-43 shows the location of the butterfly flaps; on the underside of the wings, near the roots
These flaps would extend during combat (automatically, in the case of later Nakajima fighters) to increase wing area and lift. During level and high speed flight they would retract to reduce drag. Again, this mainly improved handling on the left-hand side of the doghouse: it improved instantaneous turn rate but did very little for sustained turn rate.
In general, turn performance was sacrificed in WWII for more speed, as the two were difficult to reconcile. There were a small number of tricks known to engineers at the time that could improve instantaneous turn rate on fast aircraft with high wing loading, but these tricks were inadequate to the task of designing an aircraft that was very fast and also very maneuverable. Designers tended to settle for as fast as possible while still possessing decent turning performance.
Climb Rate
Climb rate was most important for interceptor aircraft tasked with quickly getting to the level of intruding enemy aircraft. When an aircraft climbs it gains potential energy, which means it needs spare available power. The specific excess power of an aircraft is equal to V(T − D)/W, where V is airspeed, W is weight, T is thrust and D is drag. Note that lift isn't anywhere in this equation! Provided that the plane has adequate lift to stay in the air, and its wings are reasonably efficient at generating lift so that the D term doesn't get too high, a plane with stubby wings can be quite the climber!
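The specific excess power equation is easy to play with numerically. Every figure below is an assumption, chosen to be roughly in late-war fighter territory:

```python
# Ps = V * (T - D) / W, the specific excess power relation from the text.
# If all of the excess power goes into climbing, Ps is the climb rate in m/s.
def specific_excess_power(v_ms, thrust_n, drag_n, weight_n):
    return v_ms * (thrust_n - drag_n) / weight_n

v = 90.0                  # climb speed, m/s (assumed)
thrust = 14_000.0         # propeller thrust at that speed, N (assumed)
drag = 6_000.0            # total drag at that speed, N (assumed)
weight = 3_500.0 * 9.81   # a 3500 kg fighter

ps = specific_excess_power(v, thrust, drag, weight)
print(ps)  # ≈ 21 m/s of climb, a very strong rate
```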
The Mitsubishi J2M Raiden is an excellent example of what a fighter optimized for climb rate looked like.
A captured J2M in the US during testing
The J2M had a very aerodynamically clean design, somewhat at the expense of pilot visibility and decidedly at the expense of turn rate. The airframe was comparatively light, somewhat at the expense of firepower and at great expense to fuel capacity. Surprisingly for a Japanese aircraft, there was some pilot armor. The engine was, naturally, the most powerful available at the time. The wings, in addition to being somewhat small by Japanese standards, had laminar-flow airfoils that sacrificed maximum lift for lower drag.
The end result was an aircraft that was the polar opposite of the comparatively slow, long-ranged and agile A6M Zero-sen fighters that IJN pilots were used to! But it certainly worked. The J2M was one of the fastest-climbing piston-engine aircraft of the war, comparable to the F8F Bearcat.
The design requirements for climb rate were practically the same as the design requirements for acceleration, and could generally be reconciled with the design requirements for dive performance and top speed. The design requirements for turn rate were very difficult to reconcile with the design requirements for climb rate.
Roll Rate
In maneuvering combat, aircraft roll to the desired orientation and then pitch. The ability to roll quickly allows the fighter to transition between turns faster, giving it an edge in maneuvering combat.
Aircraft roll with their ailerons by making one wing generate more lift while the other wing generates less lift.
The physics from there are the same for any other rotating object. Rolling acceleration is a function of the amount of torque that the ailerons can provide divided by the moment of inertia of the aircraft about the roll axis. So, to improve roll rate, a fighter needs the lowest possible moment of inertia and the highest possible torque from its ailerons.
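A toy calculation shows how much armament placement alone can matter. The torque and inertia figures are invented for illustration:

```python
import math

# Rolling acceleration = aileron torque / roll moment of inertia.
def roll_accel_deg_s2(torque_nm, inertia_kg_m2):
    return math.degrees(torque_nm / inertia_kg_m2)

torque = 12_000.0       # torque from fully deflected ailerons, N*m (assumed)
base_inertia = 3_000.0  # roll inertia of everything but the guns, kg*m^2 (assumed)

# 120 kg of guns in each wing, mounted 2.5 m out vs 0.5 m out near the roots:
wing_guns = base_inertia + 2 * 120 * 2.5 ** 2  # 4500 kg*m^2
root_guns = base_inertia + 2 * 120 * 0.5 ** 2  # 3060 kg*m^2

print(roll_accel_deg_s2(torque, wing_guns))  # ≈ 153 deg/s^2
print(roll_accel_deg_s2(torque, root_guns))  # ≈ 225 deg/s^2
```

Simply pulling the guns in toward the fuselage buys nearly half again the roll acceleration, with no change to the ailerons at all.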
The FW-190A was the fighter best optimized for roll rate. Kurt Tank's design team did everything right when it came to maximizing roll rate.
The FW-190 could out-roll nearly every other piston fighter
The FW-190 has the majority of its mass near the center of the aircraft. The fuel is all stored in the fuselage and the guns are located either above the engine or in the roots of the wings. Later versions added more guns, but these were placed just outside of the propeller arc.
Twin engined fighters suffered badly in roll rate in part because the engines had to be placed far from the centerline of the aircraft. Fighters with armament far out in the wings also suffered.
The ailerons were very large relative to the size of the wing. This meant that they could generate a lot of torque. Normally, large ailerons were a problem for pilots to deflect. Most World War Two fighters did not have any hydraulic assistance; controls needed to be deflected with muscle power alone, and large controls could encounter too much wind resistance for the pilots to muscle through at high speed.
The FW-190 overcame this in two ways. The first was that, compared to the Bf 109, the cockpit was decently roomy. Not as roomy as a P-47, of course, but still a vast improvement. Cockpit space in World War Two fighters wasn't just a matter of comfort. The pilots needed elbow room in the cockpit in order to wrestle with the control stick. The FW-190 also used controls that were actuated by solid rods rather than by cables. This meant that there was less give in the system, since cables aren't completely rigid.
Additionally, the FW-190 used Frise ailerons, which have a protruding tip that bites into the wind and reduces the necessary control forces:
Several US Navy fighters, like later models of F6F and F4U used spring-loaded aileron tabs, which accomplished something similar by different means:
In these designs a spring would assist in pulling the aileron one way, and a small tab on the aileron the opposite way in order to aerodynamically move the aileron. This helped reduce the force necessary to move the ailerons at high speeds.
Another, somewhat less obvious requirement for good roll rate in fighters was that the wings be as rigid as possible. At high speeds, the force of the ailerons deflecting would tend to twist the wings of the aircraft in the opposite direction. Essentially, the ailerons began to act like servo tabs. This meant that the roll rate would begin to suffer at high speeds, and at very high speeds the aircraft might actually roll in the opposite direction of the pilot's input.
The FW-190's wings were extremely rigid. Wing rigidity is a function of aspect ratio and construction.
The FW-190 had wings that had a fairly low aspect ratio, and were somewhat overbuilt. Additionally, the wings were built as a single piece, which was a very strong and robust approach. This had the downside that damaged wings had to be replaced as a unit, however.
Some Spitfires were modified by changing the wings from the original elliptical shape to a "clipped" planform that ended abruptly at a somewhat shorter span. This sacrificed some turning performance, but it made the wings much stiffer and therefore improved roll rate.
Finally, most aircraft at the beginning of the war had fabric-skinned ailerons, including many that had metal-skinned wings. Fabric-skinned ailerons were cheaper and less prone to vibration problems than metal ones, but at high speed the shellacked surface of the fabric just wasn't airtight enough, and a significant amount of air would begin flowing into and through the aileron. This degraded their effectiveness badly, and the substitution of metal surfaces helped greatly.
Stability and Safety
World War Two fighters were a handful. The pressures of war meant that planes were often rushed into service without thorough testing, and there were often nasty surprises lurking in unexplored corners of the flight envelope.
This is the P-51H. Even though the P-51D had been in mass production for years, it still had some lingering stability issues. The P-51H solved these by enlarging the tail. Performance was improved by a comprehensive program of drag reduction and weight reduction through the use of thinner aluminum skin.
The Bf 109 had a poor safety record in large part because of the narrow landing gear. This design kept the mass well centralized, but it made landing far too difficult for inexpert pilots.
The ammunition for the massive 37mm cannon in the P-39 and P-63 was located in the nose, far enough forward that depleting it significantly affected the aircraft's stability. Once the ammunition was expended, the aircraft was much more likely to enter dangerous spins.
The cockpit of the FW-190, while roomier than the Bf 109, had terrible forward visibility. The pilot could see to the sides and rear well enough, but a combination of a relatively wide radial engine and a hump on top of the engine cowling to house the synchronized machine guns meant that the pilot could see very little. This could be dangerous while taxiing on the ground.
So You Want to Build a Fission Bomb?
There are many reasons why one might want to build a fission bomb. Killing communists, for example, or sending a spacecraft to one of the outer planets. Building a bomb is not easy, but it can be done (see also Project, Manhattan), even with publicly available information.
Obviously, I’m not going to detail every little bit of our hypothetical bomb down to the last millimeter of wiring. First, I don’t know all that. If I did know it, posting it here might earn me a very long vacation to ADX Florence. The stuff here is just some equations and such to give you a general impression of how the design looks.
The core of our hypothetical bomb is a sphere of highly enriched uranium. We want it to be subcritical (keff<1), but not by much. The more subcritical it is, the more we have to compress it to make it critical. In a real bomb, the core is usually surrounded by a layer of dense material such as tungsten or depleted uranium called the tamper. This helps keep the core together longer, and if it’s made of U-238, you can get some extra yield from the tamper fast fissioning. To simplify our analysis, our bomb won’t have a tamper. Then, you have a bunch of chemical explosives on the outside. This is what compresses the core, and takes it from subcritical to a super prompt critical state.
When the core is super prompt critical, it's going to heat up very quickly. Within microseconds, the uranium at the center is going to become hot enough to be a gas (at very high pressure). At the edge of the core, you're going to have very high pressure uranium gas next to an area of very low pressure. This is going to result in the uranium gas blowing off very quickly, and a "rarefaction wave" forms as the core progressively evaporates away. This rarefaction wave proceeds inward at the speed of sound, and once it gets far enough in, the core becomes subcritical and the reaction stops.
Now, I’m going to make a few assumptions. These will result in some inaccuracy in our calculations, but the results will be close enough (also, it makes everything much simpler). Here they are;
1. The super prompt critical condition of the core will terminate once the rarefaction wave reaches the critical radius (rc).
2. The super prompt critical reactivity will remain constant until the core is subcritical.
3. The core is spherical with no tamper.
4. The temperature of the core is high enough that it can be treated as a photon gas (radiation pressure is the dominant force.)
5. No energy is lost to the surroundings during the process (adiabatic).
6. Our core is made of pure U-235.
Since the rarefaction wave proceeds inward at the speed of sound, the device is critical for the following period of time;
Where rmin is the radius of the core at maximum compression, and a is the speed of sound. We'll also assume the gaseous core has a specific heat ratio of 4/3, so a = √(γp/ρ) = √(4p/3ρ).
Since the process is adiabatic, we know the following;
Where Ecore is internal energy of the core at the end of the period of prompt criticality (this is the amount of energy released in the detonation). Substitute that into the speed of sound equation, and we get
Putting that aside for a moment, let’s take a look at the point kinetics equation, which describes how power increases in a reactor following a sudden increase in reactivity (our bomb is essentially a reactor that’s undergone a massive increase in reactivity);
(In this case, ρ represents reactivity, instead of density. β is the fraction of fission decay products which decay through neutron emission, and Λ is the average prompt neutron lifetime.)
The second term in that equation gives us the power contribution from delayed neutrons, so we can ignore it in this case (the bomb will have long since detonated by the time they become a factor). Also, in the case of super prompt criticality, ρ >> β. So our equation reduces to dP/dt = (ρ/Λ)P.
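With the delayed-neutron term dropped and ρ >> β, the solution of the reduced equation is simple exponential growth, P(t) = P0·exp(ρt/Λ). A small sketch with made-up numbers (the ρ and Λ values are illustrative assumptions, not real weapon parameters):

```python
import math

rho = 0.5    # reactivity of the compressed core (assumed)
Lam = 1e-8   # prompt neutron lifetime, s (a plausible fast-system scale)
p0 = 1.0     # initial power, arbitrary units

def power(t_s):
    # P(t) = P0 * exp(rho * t / Lambda)
    return p0 * math.exp(rho * t_s / Lam)

# Power e-folds every Lambda/rho = 20 ns; over half a microsecond that is
# e^25, roughly eleven orders of magnitude:
print(power(0.5e-6) / power(0.0))  # ≈ 7.2e10
```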
So to get the total amount of energy produced in the core during super prompt criticality, we need to integrate the power equation over the amount of time the core is super prompt critical. If we call that time tc, we get the following expression;
Where E1 is the amount of energy produced by one fission event (202.5 MeV). Substituting that into our first equation and the speed of sound expression, and then doing a bit of algebra (which I’ll leave out for the sake of brevity), we end up with this;
Solving for the Ecore expression, and defining Δr as the difference between criticality radius and the radius at maximum compression;
Which gives us the total amount of energy released by the detonation.
The main unknowns here are the reactivity (ρ) and critical radius (rc). Fortunately, both of these are fairly easy to determine. The critical radius is the radius at which a sphere of material has a keff (ratio of neutron production to neutron absorption) of 1.
ν is the average number of neutrons produced per fission event (~2.5), Σf is the macroscopic fission cross section (σf ≈ 1 barn for fast neutrons), D is the neutron diffusion coefficient (0.00434 m for U-235), Bg is the 'geometric buckling', and Σa is the macroscopic absorption cross section (σa ≈ 0.09 barns for fast neutrons). Convert from σf to Σf (and σa to Σa) using Σ = Nσ, where N = ρNA/A is the atom density of the material.
Bg for a sphere can be calculated using the following formula;
Setting keff to 1 and solving for r, we find that the critical radius rc is roughly 5.2 cm. A sphere of U-235 of this size will have a mass of about 11.25 kg.
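The critical-radius calculation can be reproduced in a few lines of Python using the constants quoted above. With these exact inputs the one-group model lands at roughly 6 cm and 17 kg rather than 5.2 cm and 11.25 kg; the answer is quite sensitive to the cross sections used, so treat this as a ballpark sketch:

```python
import math

NA = 6.022e23   # Avogadro's number, 1/mol
rho_gcc = 19.1  # density of U-235, g/cm^3
A = 235         # grams per mole
barn = 1e-28    # m^2

N = rho_gcc * NA / A * 1e6   # atom density, atoms/m^3
Sigma_f = N * 1.0 * barn     # macroscopic fission cross section, 1/m
Sigma_a = N * 0.09 * barn    # macroscopic absorption cross section, 1/m
nu = 2.5                     # neutrons per fission
D = 0.00434                  # diffusion coefficient, m

# keff = nu*Sigma_f / (Sigma_a + D*Bg^2) = 1, with Bg = pi/r for a bare sphere:
Bg = math.sqrt((nu * Sigma_f - Sigma_a) / D)
r_c = math.pi / Bg                                    # critical radius, m
mass_kg = 4 / 3 * math.pi * r_c ** 3 * rho_gcc * 1e3  # g/cm^3 -> kg/m^3

print(r_c * 100, mass_kg)  # ≈ 6.0 cm and ≈ 17.5 kg with these inputs
```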
Now that we have the keff equation, determining ρ is fairly simple.
Since keff is going to be higher the more you compress the core, you obviously want to compress it as much as possible. The following equation gives the amount of explosive needed to compress the core by a given amount;
Escfc is the amount of energy needed to compress the core by a given amount.
η is the amount of energy contained in each unit of chemical explosive (4184 kJ/kg for TNT), and ε is the efficiency of the implosion process. ε is about 30% in well-designed nuclear weapons; crude designs are probably closer to 5-10%.
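To put numbers on the last step: the explosive mass is just the required compression energy divided by η·ε. The compression energy below is a pure placeholder, not a real figure, but the division works the same either way:

```python
eta = 4184e3      # J per kg of TNT (from the text)
eps_good = 0.30   # implosion efficiency, well-designed weapon (from the text)
eps_crude = 0.07  # crude design, toward the middle of the text's 5-10%

E_compress = 5e8  # J needed to compress the core -- placeholder, not a real value

def explosive_kg(e_needed, eps):
    # only eta * eps joules of each kilogram actually compress the core
    return e_needed / (eta * eps)

print(explosive_kg(E_compress, eps_good))   # ≈ 400 kg
print(explosive_kg(E_compress, eps_crude))  # ≈ 1700 kg
```

The same energy requirement costs a crude design several times the explosive mass, which is part of why first-generation implosion weapons were so bulky.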
Congratulations! Now you have (a non-trivial portion of) the knowledge you need to build a working fission device!
Edit: Updated 4/24 with corrected cross sections