Sturgeon's House

A Quick Explanation of Forward Swept Wings





Every so often someone asks a question about the advantages of forward-swept wings, and usually they get a shitty half-assed answer about how they somehow improve maneuverability and stuff.  I will attempt to provide a fully-assed answer.


The short version is that forward-swept wings do roughly the same thing as conventional aft-swept wings: they increase the critical Mach number.  I found an excellent video explaining transonic effects, so watch that first if you don't already know what that is.


Typically, a straight wing starts experiencing shock wave buildup at around Mach 0.7.  These effects are generally bad: control surfaces lose effectiveness, the aircraft's center of lift moves, stability can decrease, and drag greatly increases.


It's generally desirable to delay the onset of this badness.  The critical Mach number is strongly affected by the thickness-to-chord ratio:







So, the critical Mach number could be increased by having really thin wings.  The F-104 does this, but at the expense of having ridiculously tiny wings that generate barely any lift and leave no internal volume for fuel storage.


The critical Mach number could also be raised by keeping a normal wing thickness but using a very long chord, which lowers the thickness-to-chord ratio.  This would improve the supersonic performance of the wing, but subsonic drag would suffer: the resulting low-aspect-ratio wing would have a large amount of induced drag, and the additional wetted area would add still more drag.


Finally, the wing could be swept.  This increases the chord length relative to the airflow (and so lowers the effective thickness-to-chord ratio) without giving the wing undue surface area and thus subsonic drag.


In theory, the critical Mach number could be increased by a factor equal to the inverse of the cosine of the sweep angle (much like calculating the LOS thickness of tank armor, and for the same reason), but secondary effects mean that sweep is less effective than this in practice.  The practical effect of sweep on drag coefficient looks about like this:
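The ideal inverse-cosine relationship is easy to tabulate.  Here's a minimal Python sketch (the function name and printed table are my own; as noted above, real wings gain noticeably less than these ideal figures):

```python
import math

def ideal_mcrit_factor(sweep_deg):
    """Ideal simple-sweep-theory factor by which the critical Mach
    number rises for a given leading-edge sweep angle.  Only the flow
    component normal to the leading edge "sees" the airfoil section,
    so the freestream critical Mach scales as 1 / cos(sweep)."""
    return 1.0 / math.cos(math.radians(sweep_deg))

# Ideal-theory numbers only; secondary effects eat into these gains.
for sweep_deg in (0, 15, 30, 45):
    print(f"{sweep_deg:2d} deg sweep -> Mcrit x {ideal_mcrit_factor(sweep_deg):.3f}")
```

Note that 15 degrees of sweep buys you only about 3.5% even in the ideal case, which is why modest sweep angles do so little for transonic performance.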



(from Design for Air Combat)


This, incidentally, is why the Me 262 doesn't really have swept wings.  The change in Mcr is basically negligible for any leading-edge sweep under thirty degrees.


Note that this logic applies whether the wings are swept forwards or backwards; as far as delaying and reducing transonic effects goes, forward and rearward sweep should be equally effective.


There are some secondary effects that make forward-swept wings more desirable.  One of these is spanwise flow:




In any swept wing, the air isn't just flowing over the wing; it's flowing across it as well.  This means that while pulling Gs, the tips of an aft-swept wing will stall first.  Since the tips are no longer producing lift but the rest of the wing is, the center of lift moves forward, producing more pitch-up torque on the plane, so the nose goes up even more and the stall gets worse.  This is known as the "sabre dance," after the F-100 Super Sabre, which displayed this undesirable property.  With the wings swept forward, the roots of the wings stall first (although in practice, forward-swept-wing aircraft tend to have the wings attached well aft, so the CL still shifts forward during a stall).


To make matters worse, the air spilling out sideways and the early tip stall interfere with the effectiveness of the ailerons, which means that the aircraft can lose roll control as it increases AOA.  This is particularly alarming behavior during landing, when speed is low, AOA is high, and keeping the aircraft level is of paramount importance.


Additionally, the air spilling outwards towards the wingtips reduces lift, so reducing this spanwise flow also increases the lift coefficient.


So, forward-swept wings are a little more efficient aerodynamically than aft-swept wings.  Why aren't they more popular?


The problem is something called aeroelastic divergence, which is engineer-speak for "the goddamn wings try to tear themselves off."  I will attempt to illustrate with the finest MS Paint diagrams:




The amount of lift that a wing generates is a function of the angle of attack.  The wing will generate more lift the more inclined it is relative to the airflow.
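The lift-versus-AOA relationship can be sketched with the standard lift equation and a linear lift-curve slope.  A small Python illustration (the function name, defaults, and the 2π-per-radian slope from thin-airfoil theory are illustrative choices, not from the post; real 3D wings have lower slopes):

```python
import math

def lift_newtons(alpha_deg, airspeed_ms, wing_area_m2,
                 rho=1.225, cl_per_rad=2 * math.pi):
    """Lift from L = 0.5 * rho * V^2 * S * CL, with a linear
    CL = (lift-curve slope) * AOA.  Thin-airfoil theory gives a slope
    of about 2*pi per radian; this only holds below the stall."""
    cl = cl_per_rad * math.radians(alpha_deg)
    return 0.5 * rho * airspeed_ms ** 2 * wing_area_m2 * cl

# In the linear (pre-stall) range, doubling the AOA doubles the lift.
```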


Wings in the real world are, of course, not perfectly rigid, so when they generate lift in order to pull the weight of the fuselage through the sky, they bend slightly.


In swept wings, the wings aren't just bending; they're twisting as well, because the center of lift is not aligned with the wing's structural connection to the fuselage.


In an aft-swept wing, the force of the lift tends to twist the wings downwards.  Increasing the angle of attack will increase the lift, which will increase this downward twist, which is a naturally self-limiting (negative feedback) arrangement.


In a forward-swept wing, it's exactly the opposite.  When the angle of attack increases, lift increases and the wings twist themselves upwards, which increases lift even more which increases the twisting...
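The difference between the two feedback loops can be shown with a toy iteration.  This sketch is purely illustrative (the gain value and the proportional twist model are my own simplifications, not any real wing's structural coupling):

```python
def settled_aoa(alpha0_deg, gain, forward_sweep, steps=200):
    """Toy bend-twist feedback loop for a flexible swept wing.
    Twist is taken as proportional (by 'gain') to the current
    effective AOA.  Aft sweep twists the wing nose-down (negative
    feedback); forward sweep twists it nose-up (positive feedback)."""
    sign = 1.0 if forward_sweep else -1.0
    alpha = alpha0_deg
    for _ in range(steps):
        alpha = alpha0_deg + sign * gain * alpha  # lift -> twist -> lift
    return alpha

aft = settled_aoa(5.0, 0.3, forward_sweep=False)  # settles below 5 deg
fwd = settled_aoa(5.0, 0.3, forward_sweep=True)   # settles above 5 deg
# With gain >= 1 and forward sweep, the loop runs away: divergence.
```

With a small gain the forward-swept case still converges, just to a higher angle; once the structural gain reaches 1, the iteration blows up, which is the "wings try to tear themselves off" scenario.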


This is why forward-swept wings had to wait until magical composites with magical properties were available.


So the body of the aircraft becomes one huge-ass wing fence?


I'm still not sure if I understand the idea behind it helping improve the ratio of chord to height - surely wing area (and so wetted surface) would be constant? Sure, the chord decreases (measured normal to the leading edge), but the length increases by the same amount so I don't see why it's different to a straight wing with the same chord measured along the airflow. I do know a guy who's meant to know a lot about this kind of stuff though, so I will talk to him and get back to you.


So the body of the aircraft becomes one huge-ass wing fence?


I'm still not sure if I understand the idea behind it helping improve the ratio of chord to height - surely wing area (and so wetted surface) would be constant? Sure, the chord decreases (measured normal to the leading edge), but the length increases by the same amount so I don't see why it's different to a straight wing with the same chord measured along the airflow. I do know a guy who's meant to know a lot about this kind of stuff though, so I will talk to him and get back to you.


It's an alternative method to improving the aspect ratio, not a method of improving the aspect ratio.



Didn't stop zee Germans from trying



Proving yet again why we take manufacturers' claims with a salt mine until the damn thing flies and exhibits those selfsame characteristics.  A design made by people who don't know the problems and a design made by people who have solved them can look similar, and even have reasonably similar expected performance (if the means of making it actually work aren't too expensive).


I'm still not sure if I understand the idea behind it helping improve the ratio of chord to height - surely wing area (and so wetted surface) would be constant? Sure, the chord decreases (measured normal to the leading edge), but the length increases by the same amount so I don't see why it's different to a straight wing with the same chord measured along the airflow. I do know a guy who's meant to know a lot about this kind of stuff though, so I will talk to him and get back to you.


I felt that I understood the issue quite well until you asked that question.


I'm really not sure what the advantage of a swept wing over a straight wing with the same trigonometric wing chord and span is WRT Mcr.  Apparently there are some secondary advantages to swept wings; they suffer a smaller transonic center of lift movement as a percentage of MAC, but surely that isn't the deciding factor?


Proving yet again why we take manufacturer's claims with a salt mine until the damn thing flies and exhibits those selfsame characteristics because a design made by people who don't know the problems and a design made by people who have solved the problems can look similar and even have reasonably similar expected performance (if the means of making it actually work aren't too expensive).


According to Design for Air Combat, you can get away with about fifteen degrees of forward sweep before the aeroelastic problems become too much for conventional metal structures to deal with.  As you can see from the chart above, this would do basically nothing for transonic performance.


I believe that in the Hansa Jet and Ju 287, the forward sweep is intended to move the wing roots backwards so that there's more unobstructed cabin/bomb bay space.




Planes. Planes. Planes. Planes.


Wait a tick...





Is that a redhead? Or a faded photo.


Doesn't matter to me.


Planes and redheads.


Don's interest is suddenly piqued.

How else was I supposed to get you to post in this thread? I don't know who she is, but she apparently was somehow involved with a FSW version of the F-16, which is pretty hot.


Spanwise airflow patterns on a FSW MiG-23 model.



Here you can see one of the issues with FSW; the area behind the wings is in some pretty hairy, turbulent airflow, which reduces the effectiveness of any control surfaces that live there.  For this reason, most FSW designs feature canards.  I'm not entirely sure what the rationale behind having both canards and tailplanes on the Berkut is.


Here you can see one of the issues with FSW; the area behind the wings is in some pretty hairy, turbulent airflow which reduces the effectiveness of any control surfaces that live there.  For this reason, most FSW designs feature canards.  I'm not entirely sure what the rationale behind having both canards and tailplanes on the berkut is.


I wonder what effect combining forward swept wings and thrust vectoring would have?


Here you can see one of the issues with FSW; the area behind the wings is in some pretty hairy, turbulent airflow which reduces the effectiveness of any control surfaces that live there.  For this reason, most FSW designs feature canards.  I'm not entirely sure what the rationale behind having both canards and tailplanes on the berkut is.

Here is a model of the F-18 using about the same colored smoke system for comparative purposes. 


I don't see quite as much messiness behind the wing.


I would think the tailwing wouldn't be too effective on a FSW plane but could still be somewhat useful as an elevator. Then again, I'm just a little smarter than Donward on this subject.


Thrust vectoring works its magic best, as I understand it, when conventional control surfaces have lost their effectiveness.  When the plane is at very high altitude, or when it's not moving fast enough, or when the wings are mostly stalled, turning the thrust sideways will still produce pitch or roll moments or what have you.  FSW+thrust vectoring might be a good combination for short takeoff and landing.


The Su-47, as I understand it, began as a design study for a Soviet carrier bird.  The Soviets had just started doing carriers in the 1980s, and were discovering that landing an airplane on a boat is tricky.  Very tricky.  The Su-33 navalized Flanker worked well enough, but if they could design something that was easier to land on a carrier, the Naval Aviation pilots would be much obliged.  The forward-swept wing was to give better lift for a given AOA, which would allow the pilot either to slow down or to fly the same speed with the nose closer to level (both good things), and the FSW's better roll authority at high AOA would help keep the wings level while landing on a carrier, which is also kind of important.


Eventually the design evolved into a next-generation, land-based replacement for the Su-27 family.  There weren't really any takers, so the Su-47 ended its flight-worthy days as a technology tester for the PAK FA program.


It is remarkable, given the state of the Russian economy, that Sukhoi has been able to debut two new fighter designs in the post-Soviet era, as well as continuing to update the Su-27 design.  I'm sure Lockheed Martin is thankful for Sukhoi's ingenuity, and for its ability to make scary-looking fighter planes that they might export to rogue nations on the limited R&D budget they have available.


Here is a model of the F-18 using about the same colored smoke system for comparative purposes. 


I don't see quite as much messiness behind the wing.


I would think the tailwing wouldn't be too effective on a FSW plane but could still be somewhat useful as a elevator. Then again, I'm just a little smarter than Donward on this subject. 


I think that in that picture, most of the turbulence is coming off of the LERX (the highly swept surfaces that blend between the wings and the fuselage) and not the wing.  That is actually what they're supposed to do; by some black magic, the vortex from the LERX adds energy to the flow over the wings, which helps delay the wings from stalling.


They don't seem to be mucking up the flow over the horizontal stabilizers that much, but I know that the baby Hornet had issues with turbulence from the LERX buffeting the vertical stabilizers, which was causing them to wear out faster than anticipated.  This was fixed on the Super Hornet by re-sizing everything a little.


  • 2 weeks later...


I'm still not sure if I understand the idea behind it helping improve the ratio of chord to height - surely wing area (and so wetted surface) would be constant? Sure, the chord decreases (measured normal to the leading edge), but the length increases by the same amount so I don't see why it's different to a straight wing with the same chord measured along the airflow. I do know a guy who's meant to know a lot about this kind of stuff though, so I will talk to him and get back to you.


OK, I think I found the answer to this.


A straight wing with an equal chord/height ratio would actually have a slightly better Mcrit than a comparable swept wing.


However, in supersonic flight a straight wing would stick out the sides of the Mach cone created by the nose of the aircraft, while swept wings can fit inside.  Having your wings stick outside the Mach cone is a PITA for various reasons.
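Whether a wing fits inside the Mach cone is straightforward geometry: the cone's half-angle is μ = arcsin(1/M), and a leading edge stays inside the cone when its sweep exceeds 90° − μ.  A small Python sketch (function names are my own shorthand):

```python
import math

def mach_cone_half_angle_deg(mach):
    """Half-angle of the Mach cone, mu = asin(1/M), for M > 1."""
    return math.degrees(math.asin(1.0 / mach))

def leading_edge_inside_cone(mach, sweep_deg):
    """True when the leading edge is swept back further than the cone
    itself, i.e. sweep > 90 - mu, giving a 'subsonic leading edge'."""
    return sweep_deg > 90.0 - mach_cone_half_angle_deg(mach)

# At Mach 1.4 the cone half-angle is ~45.6 deg: a 50-degree-swept
# leading edge stays inside the cone, while a straight wing sticks out.
```

Note that as Mach number increases the cone narrows, so a wing that fits inside at Mach 1.2 may poke out at Mach 2.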


They probably marginally degraded the stall characteristics, and provided a very slight increase in critical Mach number and drag-rise Mach number that the engines were too weak to propel the airframe to anyway.  By the inverse cosine rule, I get that under ideal conditions an 18.5-degree sweep would provide a 5.4% increase in Mcrit, and in actuality it would have been less than that.


OK, I think I found the answer to this.


A straight wing with an equal chord/height ratio would actually have a slightly better Mcrit than a comparable swept wing.


However, in supersonic flight a straight wing would stick out the sides of the Mach cone created by the nose of the aircraft, while swept wings could fit inside.  Having your wings stick outside the Mach cone is a PITA for various reasons.


That makes sense and jibes with what I've heard elsewhere.


  • 9 months later...





The sweep angle looks very modest on this, so I suspect that the forward sweep may be for structural reasons (to place the wing spar in a more convenient location, for instance) rather than aerodynamic ones.


NASA also uploaded a very nice video of aeroelastic divergence in a FSW:






  • Similar Content

    • By LostCosmonaut
      Compared to the most well known Japanese fighter of World War 2, the A6M “Zero”, the J2M Raiden (“Jack”) was both less famous and less numerous. More than 10,000 A6Ms were built, but barely more than 600 J2Ms were built. Still, the J2M is a noteworthy aircraft. Despite being operated by the Imperial Japanese Navy (IJN), it was a strictly land-based aircraft. The Zero was designed with a lightweight structure, to give extreme range and maneuverability. While it had a comparatively large fuel tank, it was lightly armed, and had virtually no armor. While the J2M was also very lightly built, it was designed that way to meet a completely different set of requirements; those of a short-range interceptor. The J2M's design led to it being one of the fastest climbing piston-engine aircraft in World War 2, even though its four 20mm cannons made it much more heavily armed than most Japanese planes.

      Development of the J2M began in October 1938, under the direction of Jiro Hirokoshi, in response to the issuance of the 14-shi interceptor requirement (1). Hirokoshi had also designed the A6M, which first flew in April 1939. However, development was slow, and the J2M would not make its first flight until 20 March 1942, nearly 3 ½ years later (2). Initially, this was due to Mitsubishi's focus on the A6M, which was further along in development, and of vital importance to the IJN's carrier force. Additionally, the J2M was designed to use a more powerful engine than other Japanese fighters. The first aircraft, designated J2M1, was powered by an MK4C Kasei 13 radial engine, producing 1430 horsepower from 14 cylinders (3) (compare to 940 horsepower for the A6M2) and driving a three bladed propeller. The use of such a powerful engine was driven by the need for a high climb rate, in order to fulfill the requirements set forth in the 14-shi specification.
      The climb rate of an aircraft is driven by specific excess power; by climbing an aircraft is gaining potential energy, which requires power to generate. Specific Excess Power is given by the following equation;
      It is clear from this equation that weight and drag must be minimized, while thrust and airspeed are maximized. The J2M was designed using the most powerful engine then available, to maximize thrust. Moreover, the engine was fitted with a long cowling, with the propeller on an extension shaft, also to minimize drag. In a more radical departure from traditional Japanese fighter design (as exemplified by aircraft such as the A6M and Ki-43), the J2M had comparatively short, stubby wings, only 10.8 m wide on the J2M3 variant, with a relatively high wing loading of 1.59 kN/m2 (33.29 lb/ft2) (2). (It should be noted that this wing loading is still lower than contemporary American aircraft such as the F6F Hellcat. The small wings reduced drag, and also reduced weight. More weight was saved by limiting the J2M's internal fuel, the J2M3 had only 550 liters of internal fuel (2).
      Hirokoshi did add some weight back into the J2M's design. 8 millimeters of steel armor plate protected the pilot, a luxurious amount of protection compared to the Zero. And while the J2M1 was armed with the same armament as the A6M (two 7.7mm machine guns and two Type 99 Model 2 20mm cannons), later variants would be more heavily armed, with the 7.7mm machine guns deleted in favor of an additional pair of 20mm cannons. Doubtlessly, this was driven by Japanese wartime experience; 7.7mm rounds were insufficient to deal with strongly built Grumman fighters, let alone a target like the B-17.
      The first flight of the J2M Raiden was on March 20th, 1942. Immediately, several issues were identified. One design flaw pointed out quickly was that the cockpit design on the J2M1, coupled with the long cowling, severely restricted visibility. (This issue had been identified by an IJN pilot viewing a mockup of the J2M back in December 1940 (1).) The landing speed was also criticized for being too high; while the poor visibility over the nose exacerbated this issue, pilots transitioning from the Zero would be expected to criticize the handling of a stubby interceptor.

      Wrecked J2M in the Philippines in 1945. The cooling fan is highly visible.
      However, the biggest flaw the J2M1 had was poor reliability. The MK4C engine was not delivering the expected performance, and the propeller pitch control was unreliable, failing multiple times. (1) As a result, the J2M1 failed to meet the performance set forth in the 14-shi specification, achieving a top speed of only 577 kph, well short of the 600 kph required. Naturally, the climb rate suffered as well. Only a few J2M1s were built.
      The next version, the J2M2, had several improvements. The engine was updated to the MK4R-A (3); this engine featured a methanol injection system, enabling it to produce up to 1,800 horsepower for short periods. The propeller was switched for a four blade unit. The extension shaft in the J2M1 had proved unreliable, in the J2M2 the cowling was shortened slightly, and a cooling fan was fitted at the the front. These modifications made the MK4R-A more reliable than the previous engine, despite the increase in power.
      However, there were still problems; significant vibrations occurred at certain altitudes and speeds; stiffening the engine mounts and propeller blades reduced these issues, but they were never fully solved (1). Another significant design flaw was identified in the summer of 1943; the shock absorber on the tail wheel could jam the elevator controls when the tailwheel retracted, making the aircraft virtually uncontrollable. This design flaw led to the death of one IJN pilot, and nearly killed two more (1). Ultimately, the IJN would not put the J2M2 into service until December 1943, 21 months after the first flight of the J2M1. 155 J2M2s would be built by Mitsubishi (3).
      By the time the J2M2 was entering service, the J2M3 was well into testing. The J2M3 was the most common variant of the Raiden, 260 were produced at Mitsubishi's factories (3). It was also the first variant to feature an armament of four 20mm cannons (oddly, of two different types of cannon with significantly different ballistics (2); the 7.7mm machine guns were replace with two Type 99 Model 1 cannons). Naturally, the performance of the J2M3 suffered slightly with the heavier armament, but it still retained its excellent rate of climb. The Raiden's excellent rate of climb was what kept it from being cancelled as higher performance aircraft like the N1K1-J Shiden came into service.

      The J2M's was designed to achieve a high climb rate, necessary for its intended role as an interceptor. The designers were successful; the J2M3, even with four 20mm cannons, was capable of climbing at 4650 feet per minute (1420 feet per minute) (2). Many fighters of World War 2, such as the CW-21, were claimed to be capable of climbing 'a mile a minute', but the Raiden was one of the few piston-engine aircraft that came close to achieving that mark. In fact, the Raiden climbed nearly as fast as the F8F Bearcat, despite being nearly three years older. Additionally, the J2M could continue to climb at high speeds for long periods; the J2M2 needed roughly 10 minutes to reach 30000 feet (9100 meters) (4), and on emergency power (using the methanol injection system), could maintain a climb rate in excess of 3000 feet per minute up to about 20000 feet (about 6000 meters).


      Analysis in Source (2) shows that the J2M3 was superior in several ways to one of its most common opponents, the F6F Hellcat. Though the Hellcat was faster at lower altitudes, the Raiden was equal at 6000 meters (about 20000 feet), and above that rapidly gained superiority. Additionally, the Raiden, despite not being designed for maneuverability, still had a lower stall speed than the Hellcat, and could turn tighter. The J2M3 actually had a lower wing loading than the American plane, and had flaps that could be used in combat to expand the wing area at will. As shown in the (poorly scanned) graphs on page 39 of (2), the J2M possessed a superior instantaneous turn capability to the F6F at all speeds. However, at high speeds the sustained turn capability of the American plane was superior (page 41 of (2)).
      The main area the American plane had the advantage was at high speeds and low altitudes; with the more powerful R-2800, the F6F could more easily overcome drag than the J2M. The F6F, as well as most other American planes, were also more solidly built than the J2M. The J2M also remained plagued by reliability issues throughout its service life.
      In addition to the J2M2 and J2M3 which made up the majority of Raidens built, there were a few other variants. The J2M4 was fitted with a turbo-supercharger, allowing its engine to produce significantly more power at high altitudes (1). However, this arrangement was highly unreliable, and let to only two J2M4s being built. Some sources also report that the J2M4 had two obliquely firing 20mm Type 99 Model 2 cannons in the fuselage behind the pilot (3). The J2M5 used a three stage mechanical supercharger, which proved more reliable than the turbo-supercharger, and still gave significant performance increases at altitude. Production of the J2M5 began at Koza 21st Naval Air Depot in late 1944 (6), but ultimately only about 34 would be built (3). The J2M6 was developed before the J2M4 and J2M6, it had minor updates such as an improved bubble canopy, only one was built (3). Finally, there was the J2M7, which was planned to use the same engine as the J2M5, with the improvements of the J2M6 incorporated. Few, if any, of this variant were built (3).
      A total of 621 J2Ms were built, mostly by Mitsubishi, which produced 473 airframes (5). However, 128 aircraft (about 1/5th of total production), were built at the Koza 21st Naval Air Depot (6). In addition to the reliability issues which delayed the introduction of the J2M, production was also hindered by American bombing, especially in 1945. For example, Appendix G of (5) shows that 270 J2Ms were ordered in 1945, but only 116 were produced in reality. (Unfortunately, sources (5) and (6) do not distinguish between different variants in their production figures.)
      Though the J2M2 variant first flew in October 1942, initial production of the Raiden was very slow. In the whole of 1942, only 13 airframes were produced (5). This included the three J2M1 prototypes. 90 airframes were produced in 1943, a significant increase over the year before, but still far less than had been ordered (5), and negligible compared to the production of American types. Production was highest in the spring and summer of 1944 (5), before falling off in late 1944 and 1945.
      The initial J2M1 and J2M2 variants were armed with a pair of Type 97 7.7mm machine guns, and two Type 99 Model 2 20mm cannons. The Type 97 used a 7.7x56mm rimmed cartridge; a clone of the .303 British round (7). This was the same machine gun used on other IJN fighters such as the A5M and A6M. The Type 99 Model 2 20mm cannon was a clone of the Swiss Oerlikon FF L (7), and used a 20x101mm cartridge.
      The J2M3 and further variants replaced the Type 97 machine guns with a pair of Type 99 Model 1 20mm cannons. These cannons, derived from the Oerlikon FF, used a 20x72mm cartridge (7), firing a round with roughly the same weight as the one used in the Model 2 at much lower velocity (2000 feet per second vs. 2500 feet per second (3), some sources (7) report an even lower velocity for the Type 99). The advantage the Model 1 had was lightness; it weighed only 26 kilograms vs. 34 kilograms for the model 2. Personally, I am doubtful that saving 16 kilograms was worth the difficulty of trying to use two weapons with different ballistics at the same time. Some variants (J2M3a, J2M5a) had four Model 2 20mm cannons (3), but they seem to be in the minority.

      In addition to autocannons and machine guns, the J2M was also fitted with two hardpoints which small bombs or rockets could be attached to (3) (4). Given the Raiden's role as an interceptor, and the small capacity of the hardpoints (roughly 60 kilograms) (3), it is highly unlikely that the J2M was ever substantially used as a bomber. Instead, it is more likely that the hardpoints on the J2M were used as mounting points for large air to air rockets, to be used to break up bomber formations, or ensure the destruction of a large aircraft like the B-29 in one hit. The most likely candidate for the J2M's rocket armament was the Type 3 No. 6 Mark 27 Bomb (Rocket) Model 1. Weighing 145 pounds (65.8 kilograms) (8), the Mark 27 was filled with payload of 5.5 pounds of incendiary fragments; upon launch it would accelerate to high subsonic speeds, before detonating after a set time (8). It is also possible that the similar Type 3 No. 1 Mark 28 could have been used; this was similar to the Mark 27, but much smaller, with a total weight of only 19.8 pounds (9 kilograms).
      The first unit to use the J2M in combat was the 381st Kokutai (1). Forming in October 1943, the unit at first operated Zeros, though gradually it filled with J2M2s through 1944. Even at this point, there were still problems with the Raiden's reliability. On January 30th, a Japanese pilot died when his J2M simply disintegrated during a training flight. By March 1944, the unit had been dispatched to Balikpapan, in Borneo, to defend the vital oil fields and refineries there. But due to the issues with the J2M, it used only Zeros. The first Raidens did not arrive until September 1944 (1). Reportedly, it made its debut on September 30th, when a mixed group of J2Ms and A6Ms intercepted a formation of B-24s attacking the Balikpapan refineries. The J2Ms did well for a few days, until escorting P-47s and P-38s arrived. Some 381st Raidens were also used in defense of Manila, in the Phillipines, as the Americans retook the islands. (9) By 1945, all units were ordered to return to Japan to defend against B-29s and the coming invasion. The 381st's J2Ms never made it to Japan; some ended up in Singapore, where they were found by the British (1).

      least three units operated the J2M in defense of the home islands of Japan; the 302nd, 332nd, and 352nd Kokutai. The 302nd's attempted combat debut came on November 1st, 1944, when a lone F-13 (reconaissance B-29) overflew Tokyo (1). The J2Ms, along with some Zeros and other fighters, did not manage to intercept the high flying bomber. The first successful attack against the B-29s came on December 3rd, when the 302nd shot down three B-29s. Later that month the 332nd first engaged B-29s attacking the Mitsubishi plant on December 22nd, shooting down one. (1)
      The 352nd operated in Western Japan, against B-29s flying out of China in late 1944 and early 1945. At first, despite severe maintenace issues, they achieved some successes, such as on November 21st, when a formation of B-29s flying at 25,000 feet was intercepted. Three B-29s were shot down, and more damaged.

      In general, when the Raidens were able to get to high altitude and attack the B-29s from above, they were relatively successful. This was particularly true when the J2Ms were assigned to intercept B-29 raids over Kyushu, which were flown at altitudes as low as 16,000 feet (1). The J2M also had virtually no capability to intercept aircraft at night, which made them essentially useless against LeMay's incendiary raids on Japanese cities. Finally the arrival of P-51s in April 1945 put the Raidens at a severe disadvantage; the P-51 was equal to or superior to the J2M in almost all respects, and by 1945 the Americans had much better trained pilots and better maintained machines. The last combat usage of the Raiden was on the morning of August 15th. The 302nd's Raidens and several Zeros engaged several Hellcats from VF-88 engaged in strafing runs. Reportedly four Hellcats were shot down, for the loss of two Raidens and at least one Zero(1). Japan surrendered only hours later.

      At least five J2Ms survived the war, though only one intact Raiden exists today. Two of the J2Ms were captured near Manila on February 20th, 1945 (9) (10). One of them was used for testing, but only briefly; on its second flight in American hands, an oil line in the engine failed, forcing it to land. The aircraft was later destroyed in a ground collision with a B-25 (9). Two more were found by the British in Singapore (1), and were flown in early 1946 by ex-IJN personnel (under close British supervision). The last Raiden was captured in Japan in 1945 and transported to the US. At some point, it ended up in a park in Los Angeles, before being restored to static display at the Planes of Fame museum in California.

      F6F-5 vs. J2M3 Comparison
      Further reading:
      An additional two dozen Raiden photos: https://www.worldwarphotos.info/gallery/japan/aircrafts/j2m-raiden/
    • By Collimatrix
      But if you try sometimes...

      Fighter aircraft became much better during the Second World War.  But, apart from the development of engines, it was not a straightforward matter of monotonic improvement.  Aircraft are a series of compromises.  Improving one aspect of performance almost always compromises others.  So, for aircraft designers in World War Two, the question was not so much "what will we do to make this aircraft better?" but "what are we willing to sacrifice?"

      To explain why, let's look at the forces acting on an aircraft:

      Lift is the force that keeps the aircraft from becoming one with the Earth.  It is generally considered a good thing. 
      The lift equation is L = 0.5 × CL × R × V² × A, where L is lift, CL is the lift coefficient (a measure of the effectiveness of the wing based on its shape and other factors), R is air density, V is airspeed and A is the area of the wing.

      Airspeed is very important to an aircraft's ability to make lift, since the force of lift grows with the square of airspeed and in linear relation to all other factors.  This means that aircraft will have trouble producing adequate lift during takeoff and landing, since that's when they slow down the most.
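As a concrete illustration, the square law in the lift equation can be sketched in a few lines of Python.  The numbers here are made-up round figures, not any particular aircraft:

```python
# Sketch of the lift equation L = 0.5 * C_L * rho * V^2 * A.
# All inputs are illustrative, not measured data for a real fighter.
def lift(cl, rho, v, area):
    """Lift force in newtons: cl unitless, rho in kg/m^3, v in m/s, area in m^2."""
    return 0.5 * cl * rho * v**2 * area

RHO_SL = 1.225  # approximate sea-level air density, kg/m^3

# Doubling airspeed quadruples lift, all else being equal:
slow = lift(1.2, RHO_SL, 60.0, 20.0)
fast = lift(1.2, RHO_SL, 120.0, 20.0)
print(fast / slow)  # -> 4.0
```

This is why takeoff and landing, the slowest phases of flight, are where lift is hardest to come by.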
      Altitude is also a significant factor in an aircraft's ability to make lift.  The density of air decreases with altitude above sea level, falling roughly exponentially (though the decrease is close to linear over the altitudes at which WWII fighters fought):

      Finally, wings work better the bigger they are.  Wing area directly relates to lift production, provided that wing shape is kept constant.

      While coefficient of lift CL contains many complex factors, one important and relatively simple factor is the angle of attack, also called AOA or alpha.  The more tilted an airfoil is relative to the airflow, the more lift it will generate.  The lift coefficient (and thus lift force, all other factors remaining equal) increases more or less linearly until the airfoil stalls:

      Essentially what's going on is that the greater the AOA, the more the wing "bends" the air around the wing.  But the airflow can only become so bent before it detaches.  Once the wing is stalled it doesn't stop producing lift entirely, but it does create substantially less lift than it was just before it stalled.  

      Drag is the force acting against the movement of any object travelling through a fluid.  Since it slows aircraft down and makes them waste fuel in overcoming it, drag is a total buzzkill and is generally considered a bad thing.

      The drag equation is D = 0.5 × CD × R × V² × A, where D is drag, CD is the drag coefficient (a measure of how "draggy" a given aircraft is), R is air density, V is airspeed and A is the frontal area of the aircraft.

      This equation is obviously very similar to the lift equation, and this is where designers hit the first big snag.  Lift is good and drag is bad, but because the factors that cause these two forces are so similar, most measures that increase lift will also increase drag, and most measures that reduce drag will also reduce lift.

      Generally speaking, wing loading (the plane's weight divided by its wing area) increased with newer aircraft models.  The stall speed (the slowest possible speed at which an aircraft can fly without stalling) also increased.  The massive increases in engine power alone were not sufficient to provide the increases in speed that designers wanted.  They had to deliberately sacrifice lift production in order to minimize drag.
      World War Two saw the introduction of laminar-flow wings.  These were wings that had a cross-section (or airfoil) that generated less turbulent airflow than previous airfoil designs.  However, they also generated much less lift.  Watch a B-17 (which does not have a laminar-flow wing) and a B-24 (which does) take off.  The B-24 eats up a lot more runway before its nose pulls up.

      There are many causes of aerodynamic drag, but drag on a WWII fighter aircraft can be broken down into two major categories.  There is induced drag, which is caused by wingtip vortices and is a byproduct of lift production, and parasitic drag, which is everything else.  Induced drag is interesting in that it actually decreases with airspeed.  So for takeoff and landing it is a major consideration, but for cruising flight it is less important.
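The interplay of the two drag components can be sketched numerically.  The coefficients below are arbitrary illustrative values chosen only to show the shape of the curve, not data for any real aircraft:

```python
# Toy model: parasitic drag grows with V^2, while induced drag in level
# flight at constant lift falls off as 1/V^2.  k_par and k_ind are
# made-up constants for illustration.
def total_drag(v, k_par=0.05, k_ind=500000.0):
    parasitic = k_par * v**2
    induced = k_ind / v**2
    return parasitic + induced

# Total drag is high at low speed (induced drag dominates), high at top
# speed (parasitic drag dominates), and lowest somewhere in between:
speeds = list(range(40, 201, 10))
best = min(speeds, key=total_drag)
print(best, round(total_drag(best), 1))
```

The "bucket" shape of this curve is why induced drag matters most near takeoff, landing, and hard maneuvering speeds, and least at cruise.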

      However, induced drag is also significant during combat maneuvering.  Wings with a higher aspect ratio, that is, a higher ratio of wingspan to wing chord (the distance from the leading edge to the trailing edge of the wing), produce less induced drag.

      So, for the purposes of producing good cruise efficiency, reducing induced drag was not a major consideration.  For producing the best maneuvering fighter, reducing induced drag was significant.

      Weight is the force counteracting lift.  The more weight an aircraft has, the more lift it needs to produce.  The more lift it needs to produce, the larger the wings need to be and the more drag they create.  The more weight an aircraft has, the less it can carry.  The more weight an aircraft has, the more sluggishly it accelerates.  In general, weight is a bad thing for aircraft.  But for fighters in WWII, weight wasn't entirely a bad thing.  The more weight an aircraft has relative to its drag, the faster it can dive.  Diving away to escape enemies if a fight was not going well was a useful tactic.  The P-47, which was extremely heavy, but comparatively well streamlined, could easily out-dive the FW-190A and Bf-109G/K.

      In general though, designers tried every possible trick to reduce aircraft weight.  Early in the war, stressed-skin monocoque designs began to take over from the fabric-covered, built-up tube designs.

      The old-style construction of the Hawker Hurricane.  It's a shit plane.

      Stressed-skin construction of the Spitfire, with a much better strength to weight ratio.
      But as the war dragged on, designers tried even more creative ways to reduce weight.  This went so far as reducing the weight of the rivets holding the aircraft together, stripping the aircraft of any unnecessary paint, and even removing or downgrading some of the guns.

      An RAF Brewster Buffalo in the Pacific theater.  The British downgraded the .50 caliber machine guns to .303 weapons in order to reduce weight.
      In some cases, however, older construction techniques were used at the war's end due to materials shortages or for cost reasons.  The German TA-152, for instance, used a large amount of wooden construction with steel reinforcement in the rear fuselage and tail in order to conserve aluminum.  This was not as light or as strong as aluminum, but beggars can't be choosers.

      Extensive use of (now rotten) wood in the rear fuselage of the TA-152
      Generally speaking, aircraft get heavier with each variant.  The Bf-109C of the late 1930s weighed 1,600 kg, but the Bf-109G of the second half of WWII had ballooned to over 2,200 kg.  One notable exception was the Soviet YAK-3:

      The YAK-3, which was originally designated YAK-1M, was a demonstration of what designers could accomplish if they had the discipline to keep aircraft weight as low as possible.  Originally, it had been intended that the YAK-1 (which had somewhat mediocre performance vs. German fighters) would be improved by installing a new engine with more power.  But all of the new and more powerful engines proved to be troublesome and unreliable.  Without any immediate prospect of more engine power, the Yakovlev engineers instead improved performance by reducing weight.  The YAK-3 ended up weighing nearly 300 kg less than the YAK-1, and the difference in performance was startling.  At low altitude the YAK-3 had a tighter turn radius than anything the Luftwaffe had.  
      Thrust is the force propelling the aircraft forwards.  It is generally considered a good thing.  Thrust was one area where engineers could and did make improvements with very few other compromises.  The art of high-output piston engine design was refined during WWII to a precise science, only to be immediately rendered obsolete by the development of jet engines.
      Piston engined aircraft convert engine horsepower into thrust airflow via a propeller.  Thrust was increased during WWII primarily by making the engines more powerful, although there were also some improvements in propeller design and efficiency.  A tertiary source of thrust was the addition of jet thrust from the exhaust of the piston engines and from Meredith Effect radiators.
      The power output of WWII fighter engines was improved in two ways; first by making the engines larger, and second by making the engines more powerful relative to their weight.  Neither process was particularly straightforward or easy, but nonetheless drastic improvements were made from the war's beginning to the war's end.

      The Pratt and Whitney Twin Wasp R-1830-1 of the late 1930s could manage about 750-800 horsepower.  By mid-war, the R-1830-43 was putting out 1200 horsepower out of the same displacement.  Careful engineering, gradual improvements, and the use of fuel with a higher and more consistent octane level allowed for this kind of improvement.

      The R-1830 Twin Wasp

      However, there's no replacement for displacement.  By the beginning of 1943, Japanese aircraft were being massacred with mechanical regularity by a new US Navy fighter, the F6F Hellcat, which was powered by a brand new Pratt and Whitney engine, the R-2800 Double Wasp.

      The one true piston engine

      As you can see from the cross-section above, the R-2800 has two banks of cylinders.  This is significant to fighter performance because even though it had 53% more engine displacement than the Twin Wasp (for US engines, the numerical designation indicated engine displacement in cubic inches), the Double Wasp had only about 21% more frontal area.  This meant that a fighter with the R-2800 was enjoying an increase in power that was not proportionate with the increase in drag.  Early R-2800-1 models could produce 1800 horsepower, but by war's end the best models could make 2100 horsepower.  That meant a 45% increase in horsepower relative to the frontal area of the engine.  Power to weight ratios for the latest model R-1830 and R-2800 were similar, while power to displacement improved by about 14%.
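The arithmetic in the paragraph above can be checked with a few lines of Python.  The diameters used below (roughly 48 in for the R-1830 and 52.8 in for the R-2800) are assumptions based on commonly published figures, which vary slightly by model:

```python
# Rough sanity check of the engine comparison, assuming diameters of
# ~48 in (R-1830) and ~52.8 in (R-2800); frontal area scales with
# diameter squared.
disp_growth = 2800 / 1830 - 1             # displacement growth
area_growth = (52.8 / 48.0) ** 2 - 1      # frontal area growth
power_per_area = (2100 / 1200) / (1 + area_growth) - 1  # hp per unit frontal area

print(round(disp_growth * 100),    # ~53% more displacement
      round(area_growth * 100),    # ~21% more frontal area
      round(power_per_area * 100)) # ~45% more power per unit of frontal area
```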
      By war's end Pratt and Whitney had the monstrous R-4360 in production:

      This gigantic engine had four rows of radially-arranged pistons.  Compared to the R-2800 it produced about 50% more power for less than 10% more frontal area.  Again, power to weight and power to displacement showed more modest improvements.  The greatest gains were from increasing thrust with very little increase in drag.  All of this was very hard for the engineers, who had to figure out how to make crankshafts and reduction gear that could handle that much power without breaking, and also how to get enough cooling air through a giant stack of cylinders.

      Attempts at boosting the thrust of fighters with auxiliary power sources like rockets and ramjets were tried, but were not successful.

      Yes, that is a biplane with retractable landing gear and auxiliary ramjets under the wings.  Cocaine is a hell of a drug.

      A secondary source of improvement in thrust came from the development of better propellers.  Most of this improvement came just before WWII, and by the time the war broke out, most aircraft had constant-speed propellers.

      For optimal performance, the angle of attack of the propeller blades must be matched to the ratio of the forward speed of the aircraft to the circular velocity of the propeller tips.  To cope with the changing requirements, constant speed or variable pitch propellers were invented that could adjust the angle of attack of the propeller blades relative to the hub.

      There was also improvement in using exhaust from the engine and the waste heat from the engine to increase thrust.  Fairly early on, designers learned that the enormous amount of exhaust produced by the engine could be directed backwards to generate thrust.  Exhaust stacks were designed to work as nozzles to harvest this small source of additional thrust:

      The exhaust stacks of the Merlin engine in a Spitfire act like jet nozzles

      A few aircraft also used the waste heat being rejected by the radiator to produce a small amount of additional thrust.  The Meredith Effect radiator on the P-51 is the best-known example:

      Excess heat from the engine was radiated into the moving airstream that flowed through the radiator.  The heat would expand the air, and the radiator was designed to use this expansion and turn it into acceleration.  In essence, the radiator of the P-51 worked like a very weak ramjet.  By the most optimistic projections the additional thrust from the radiator would cancel out the drag of the radiator at maximum velocity.  So, it may not have provided net thrust, but it did still provide thrust, and every bit of thrust mattered.
      For the most part, achieving specific design objectives in WWII fighters was a function of minimizing weight, maximizing lift, minimizing drag and maximizing thrust.  But doing this in a satisfactory way usually meant emphasizing certain performance goals at the expense of others.
      Top Speed, Dive Speed and Acceleration
      During the 1920s and 1930s, the lack of any serious air to air combat allowed a number of crank theories on fighter design to develop and flourish.  These included the turreted fighter:

      The heavy fighter:

      And fighters that placed far too much emphasis on turn rate at the expense of everything else:

      But it quickly became clear, from combat in the Spanish Civil War, China, and early WWII, that going fast was where it was at.  In a fight between an aircraft that was fast and an aircraft that was maneuverable, the maneuverable aircraft could twist and pirouette in order to force the situation to their advantage, while the fast aircraft could just GTFO the second that the situation started to sour.  In fact, this situation would prevail until the early jet age when the massive increase in drag from supersonic flight made going faster difficult, and the development of heat-seeking missiles made it dangerous to run from a fight with jet nozzles pointed towards the enemy.
      The top speed of an aircraft is the speed at which drag and thrust balance each other out, and the aircraft stops accelerating.  Maximizing top speed means minimizing drag and maximizing thrust.  The heavy fighters had a major, inherent disadvantage in terms of top speed.  This is because twin engined prop fighters have three big lumps contributing to frontal area; two engines and the fuselage.  A single engine fighter only has the engine, with the rest of the fuselage tucked neatly behind it.  The turret fighter isn't as bad; the turret contributes some additional drag, but not as much as the twin-engine design does.  It does, however, add quite a bit of weight, which cripples acceleration even if it has a smaller effect on top speed.  Early-war Japanese and Italian fighters were designed with dogfight  performance above all other considerations, which meant that they had large wings to generate large turning forces, and often had open cockpits for the best possible visibility.  Both of these features added drag, and left these aircraft too slow to compete against aircraft that sacrificed some maneuverability for pure speed.

      Drag force rises roughly as a square function of airspeed (throw this formula out the window when you reach speeds near the speed of sound).  Power is equal to force times distance over time, or force times velocity.  So, power consumed by drag will be equal to drag coefficient times frontal area times airspeed squared times airspeed.  So, the power required for a given maximum airspeed will be a roughly cubic function.  And that is assuming that the efficiency of the propeller remains constant!
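The cubic relationship above has brutal consequences for designers, which a quick sketch makes concrete (illustrative only, and only valid well below the speed of sound):

```python
# If power consumed by drag scales roughly with V^3, then the engine
# power needed to raise top speed by some fraction grows with the cube
# of that fraction's multiplier.
def power_ratio(speed_gain):
    """Factor by which engine power must grow for a given fractional
    increase in top speed, assuming P ~ V^3."""
    return (1 + speed_gain) ** 3

print(round(power_ratio(0.10), 3))  # a 10% faster fighter needs ~33% more power
print(round(power_ratio(0.25), 3))  # 25% faster needs nearly double the power
```

This is why engine power alone could never deliver the speed increases designers wanted, and why drag reduction was attacked so obsessively.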
      Acceleration is proportional to (thrust − drag)/weight.  It is possible to have an aircraft that has a high maximum speed but quite poor acceleration, and vice versa.  Indeed, the A6M5 Zero had a somewhat better power to weight ratio than the F6F-5 Hellcat, but a considerably lower top speed.  In a drag race the A6M5 would initially pull ahead, but it would be gradually overtaken by the Hellcat, which would eventually get to speeds that the Zero simply could not match.

      Maximum dive speed is also a function of drag and thrust, but it's a bit different because the weight of the aircraft times the sine of the dive angle also counts towards thrust.  In general this meant that large fighters dove better.  Drag scales with the frontal area, which is a square function of size.  Weight scales with volume (assuming constant density), which is a cubic function of size.  Big American fighters like the P-47 and F4U dove much faster than their Axis opponents, and could pick up speed that their opponents could not hope to match in a dive.
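The square-cube reasoning above can be sketched as follows.  This is a deliberately crude scaling argument, not a dive performance model:

```python
# Square-cube sketch: scale an airframe up uniformly by factor s.
# Drag area grows with s^2, weight with s^3 (assuming constant density),
# so the equilibrium dive speed -- where weight component balances
# drag, V ~ sqrt(W / area) -- grows with sqrt(s).
import math

def dive_speed_factor(scale):
    """Relative equilibrium dive speed for an airframe scaled by `scale`."""
    return math.sqrt(scale**3 / scale**2)  # = sqrt(scale)

print(round(dive_speed_factor(1.5), 3))  # a 50% larger fighter dives ~22% faster
```

All else being equal, bigger really was better in a dive.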

      A number of US fighters dove so quickly that they had problems with localized supersonic airflow.  Supersonic airflow was very poorly understood at the time, and many pilots died before somewhat improvisational solutions like dive brakes were added.

      Ranking US ace Richard Bong takes a look at the dive brakes of a P-38

      Acceleration, top speed and dive speed are all improved by reducing drag, so every conceivable trick for reducing parasitic drag was tried.

      The Lockheed P-38 used flush rivets on most surfaces as well as extensive butt welds to produce the smoothest possible flight surfaces.  This did reduce drag, but it also contributed to the great cost of the P-38.

      The Bf 109 was experimentally flown with a V-tail to reduce drag.  V-tails have lower interference drag than conventional tails, but the modification was found to compromise handling during takeoff and landing too much and was not deemed worth the small amount of additional speed.

      The YAK-3 was coated with a layer of hard wax to smooth out the wooden surface and reduce drag.  This simple improvement actually increased top speed by a small, but measurable amount!  In addition, the largely wooden structure of the aircraft had few rivets, which meant even less drag.

      The Dornier Do 335 was a novel approach to solving the problem of drag in twin-engine fighters.  The two engines were placed at the front and rear of the aircraft, driving a pusher and a tractor propeller.  This unconventional configuration led to some interesting problems, and the war ended before these could be solved.

      The J2M Raiden had a long engine cowling that extended several feet forward in front of the engine.  This tapered engine cowling housed an engine-driven fan for cooling air as well as a long extension shaft of the engine to drive the propeller.  This did reduce drag, but at the expense of lengthening the nose and so reducing pilot visibility, and also moving the center of gravity rearward relative to the center of lift.
      Designers were already stuffing the most powerful engines coming out of factories into aircraft, provided that they were reasonably reliable (and sometimes not even then).  After that, the most expedient solution to improve speed was to sacrifice lift to reduce drag and make the wings smaller.  The reduction in agility at low speeds was generally worth it, and at higher speeds relatively small wings could produce satisfactory maneuverability since lift is a square function of velocity.  Alternatively, so-called laminar flow airfoils (they weren't actually laminar flow) were substituted, which produced less drag but also less lift.  

      The Bell P-63 had very similar aerodynamics to the P-39 and nearly the same engine, but was some 80 KPH faster thanks to the new laminar flow airfoils.  However, the landing speed also increased by about 40 KPH, largely sacrificing the benign landing characteristics that P-39 pilots loved.

      The biggest problem with reducing the lift of the wings to increase speed was that it made takeoff and landing difficult.  Aircraft with less lift need to get to higher speeds to generate enough lift to take off, and need to land at higher speeds as well.  As the war progressed, fighter aircraft generally became trickier to fly, and the butcher's bill of pilots lost in accidents and training was enormous.
      Turn Rate
      Sometimes things didn't go as planned.  A fighter might be ambushed, or an ambush could go wrong, and the fighter would need to turn, turn, turn.  It might need to turn to get into a position to attack, or it might need to turn to evade an attack.

      Aircraft in combat turn with their wings, not their rudders.  This is because the wings are way, way bigger, and therefore much more effective at turning the aircraft.  The rudder is just there to make the nose do what the pilot wants it to.  The pilot rolls the aircraft until it's oriented correctly, and then begins the turn by pulling the nose up.  Pulling the nose up increases the angle of attack, which increases the lift produced by the wings.  This produces centripetal force which pulls the plane into the turn.  Since WWII aircraft don't have the benefit of computer-run fly-by-wire flight control systems, the pilot would also make small corrections with rudder and ailerons during the turn.

      But, as we saw above, making more lift means making more drag.  Therefore, when aircraft turn they tend to slow down unless the pilot guns the throttle.  Long after WWII, Col. John Boyd (PBUH) codified the relationship between drag, thrust, lift and weight as it relates to aircraft turning performance into an elegant mathematical model called energy-maneuverability theory, which also allowed for charts that depict these relationships.

      Normally, I would gush about how wonderful E-M theory is, but as it turns out there's an actual aerospace engineer named John Golan who has already written a much better explanation than I would likely manage, so I'll just link that.  And steal his diagram:

      E-M charts are often called "doghouse plots" because of the shape they trace out.  An E-M chart specifies the turning maneuverability of a given aircraft with a given amount of fuel and weapons at a particular altitude.  Turn rate is on the Y axis and airspeed is on the X axis.  The aircraft is capable of flying in any condition within the dotted line, although not necessarily continuously.  The aircraft is capable of flying continuously anywhere within the dotted line and under the solid line until it runs out of fuel.

      The aircraft cannot fly to the left of the doghouse because it cannot produce enough lift at such a slow speed to stay in the air.  Eventually it will run out of sky and hit the ground.  The curved, right-side "roof" of the doghouse is actually a continuous quadratic curve that represents centrifugal force.  The aircraft cannot fly outside of this curve or it or the pilot will break from G forces.  Finally, the rightmost, vertical side of the doghouse is the maximum speed that the aircraft can fly at; either it doesn't have the thrust to fly faster, or something breaks if the pilot should try.  The peak of the "roof" of the doghouse represents the aircraft's ideal airspeed for maximum turn rate.  This is usually called the "corner velocity" of the aircraft.
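The geometry behind the doghouse can be sketched with the standard level-turn relation, turn rate ω = g·√(n² − 1)/V, where n is the load factor in Gs.  The speeds and G limit below are illustrative round numbers, not any particular fighter:

```python
# Level-turn relation behind the doghouse plot: at load factor n and
# airspeed v, turn rate (rad/s) = g * sqrt(n^2 - 1) / v.
import math

G = 9.81  # gravitational acceleration, m/s^2

def turn_rate_deg(v, n):
    """Turn rate in degrees per second at airspeed v (m/s), load factor n."""
    return math.degrees(G * math.sqrt(n**2 - 1) / v)

# Along a fixed structural G limit, flying slower turns the nose faster.
# That is why the best turn rate sits at the lowest speed that can still
# reach the G limit -- the corner velocity:
print(round(turn_rate_deg(150.0, 6.0), 1), round(turn_rate_deg(120.0, 6.0), 1))
```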

      So, let's look at some actual (ish) EM charts:


      Now, these are taken from a flight simulator, but they're accurate enough to illustrate the point.  They're also a little busier than the example above, but still easy enough to understand.  The gray plot overlaid on the chart consists of G-force (the curves) and turn radius (the straight lines radiating from the graph origin).  The green doghouse shows the aircraft's performance with flaps.  The red curve shows the maximum sustained turn rate.  You may notice that the red line terminates on the X axis at a surprisingly low top speed; that's because these charts were made for a very low altitude confrontation, and their maximum level top speed could only be achieved at higher altitudes.  These aircraft could fly faster than the limits of the red line show, but only if they picked up extra speed from a dive.  These charts could also be overlaid on each other for comparison, but in this case that would be like a graphic designer vomiting all over the screen, or a Studio Killers music video.

      From these charts, we can conclude that at low altitude the P-51D enjoys many advantages over the Bf 109G-6.  It has a higher top speed at this altitude, 350-something vs 320-something MPH.  However, the P-51 has a lower corner speed.  In general, the P-51's flight envelope at this altitude is just bigger.  But that doesn't mean that the Bf 109 doesn't have a few tricks.  As you can see, it enjoys a better sustained turn rate from about 175 to 325 MPH.  Within that speed band, the 109 will be able to hold on to its energy better than the pony provided it uses only moderate turns.

      During turning flight, our old problem induced drag comes back to haunt fighter designers.  The induced drag equation is Cdi = Cl² / (π × AR × e), where Cdi is the induced drag coefficient, Cl is the lift coefficient, π is the irrational constant pi, AR is the aspect ratio (wingspan squared divided by wing area), and e is not the irrational constant e but an efficiency factor.

      There are a few things of interest here.  For starters, induced drag increases with the square of the lift coefficient.  Lift coefficient increases more or less linearly (see above) with angle of attack.  There are various tricks for increasing wing lift nonlinearly, as well as various tricks for generating lift with surfaces other than the wings, but in WWII, designers really didn't use these much.  So, for all intents and purposes, the induced drag coefficient will increase with the square of angle of attack, and for a given airspeed, induced drag will increase with the square of the number of Gs the aircraft is pulling.  Since this is a square function, it can outrun other, linear functions easily, so minimizing the effect of induced drag is a major consideration in improving the sustained turn performance of a fighter.
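The square relationship described above can be sketched directly from the induced drag equation.  The lift coefficient, aspect ratio and efficiency factor below are generic illustrative values:

```python
# Induced drag coefficient: Cdi = Cl^2 / (pi * AR * e).  In a steady
# level turn at load factor n, the required Cl scales with n, so Cdi
# scales with n^2.
import math

def induced_drag_coeff(cl, aspect_ratio, e=0.8):
    """Induced drag coefficient for lift coefficient cl and wing aspect ratio."""
    return cl**2 / (math.pi * aspect_ratio * e)

base = induced_drag_coeff(0.5, 6.0)        # 1 G flight at some Cl
turning = induced_drag_coeff(0.5 * 3, 6.0) # pulling 3 G needs 3x the Cl
print(round(turning / base, 6))  # ninefold increase in induced drag coefficient
```

A square function like this quickly swamps the linear factors around it, which is why sustained turn performance hinges on keeping induced drag down.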

      To maximize turn rate in a fighter, designers needed to make the fighter as light as possible, make the engine as powerful as possible, make the wings have as much area as possible, make the wings as long and skinny as possible, and to use the most efficient possible wing shape.

      You probably noticed that two of these requirements, make the plane as light as possible and make the wings as large as possible, directly contradict the requirements of good dive performance.  There is simply no way to reconcile them; the designers either needed to choose one, the other, or come to an intermediate compromise.  There was no way to have both great turning performance and great diving performance.

      Since the designers could generally be assumed to have reduced weight to the maximum possible extent and put the most powerful engine available into the aircraft, that left the design of the wings.

      The larger the wings, the more lift they generate at a given angle of attack.  The lower the angle of attack, the less induced drag.  The bigger wings would add more drag in level flight and reduce top speed, but they would actually reduce drag during maneuvering flight and improve sustained turn rate.  A rough estimate of the turning performance of the aircraft can be made by dividing the weight of the aircraft over its wing area.  This is called wing loading, and people who ought to know better put far too much emphasis on it.  If you have E-M charts, you don't need wing loading.  However, E-M charts require quite a bit of aerodynamic data to calculate, while wing loading is much simpler.
      Giving the wings a higher aspect ratio would also improve turn performance, but the designers' hands were somewhat tied in this respect.  The wings usually stored the landing gear and often the armament of the fighter.  In addition, the wings generated the lift, and making the wings too long and skinny would make them too structurally flimsy to support the aircraft in maneuvering flight.  That is, unless they were extensively reinforced, which would add weight and completely defeat the purpose.  So, designers were practically limited in how much they could vary the aspect ratio of fighter wings.

      The wing planform has a significant effect on the efficiency factor e.  The ideal shape to reduce induced drag is the "elliptical" (actually two half ellipses) wing shape used on the Supermarine Spitfire.

      This wing shape was, however, difficult to manufacture.  By the end of the war, engineers had come up with several wing planforms that were nearly as efficient as the elliptical wing, but were much easier to manufacture.

      Another way to reduce induced drag is to slightly twist the wings of the aircraft so that the wing tips point down.

      This is called washout.  The main purpose of washout was to improve the responsiveness of the ailerons during hard maneuvering, but it could give small efficiency improvements as well.  Washout obviously complicates the manufacture of the wing, and thus it wasn't that common in WWII, although the TA-152 notably did have three degrees of tip washout.

      The Bf 109 had leading edge slats that would deploy automatically at high angles of attack.  Again, the main intent of these devices was to improve the control of the aircraft during takeoff and landing and hard maneuvering, but they did slightly improve the maximum angle of attack the wing could be flown at, and therefore the maximum instantaneous turn rate of the aircraft.  The downside of the slats was that they weakened the wing structure and precluded the placement of guns inside the wing.

      Leading edge slats of a Bf 109 in the extended position

      One way to attempt to reconcile the conflicting requirements of high speed and good turning capability was the "butterfly" flaps seen on Japanese Nakajima fighters.

      This model of a Ki-43 shows the location of the butterfly flaps; on the underside of the wings, near the roots

      These flaps would extend during combat, in the case of later Nakajima fighters, automatically, to increase wing area and lift.  During level and high speed flight they would retract to reduce drag.  Again, this would mainly improve handling on the left hand side of the doghouse, and would improve instantaneous turn rate but do very little for sustained turn rate.
      In general, turn performance was sacrificed in WWII for more speed, as the two were difficult to reconcile.  There were a small number of tricks known to engineers at the time that could improve instantaneous turn rate on fast aircraft with high wing loading, but these tricks were inadequate to the task of designing an aircraft that was very fast and also very maneuverable.  Designers tended to settle for as fast as possible while still possessing decent turning performance.
      Climb Rate
      Climb rate was most important for interceptor aircraft tasked with quickly getting to the level of intruding enemy aircraft.  When an aircraft climbs it gains potential energy, which means it needs spare available power.  The specific excess power of an aircraft is equal to V(T-D)/W, where V is airspeed, W is weight, T is thrust and D is drag.  Note that lift isn't anywhere in this equation!  Provided that the plane has adequate lift to stay in the air and its wings are reasonably efficient at generating lift so that the D term doesn't get too high, a plane with stubby wings can be quite the climber!
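      That relation is easy to sketch numerically.  The thrust, drag, and weight figures below are invented round numbers for illustration, not data for any particular aircraft:

```python
def specific_excess_power(velocity, thrust, drag, weight):
    """Specific excess power P_s = V * (T - D) / W.

    With forces and weight in newtons and velocity in m/s, the result
    is in m/s: the maximum possible steady climb rate at that speed."""
    return velocity * (thrust - drag) / weight

# Illustrative numbers only, not taken from any real aircraft.
v = 70.0          # climb airspeed, m/s
thrust = 9000.0   # propeller thrust, N
drag = 4000.0     # total drag, N
weight = 30000.0  # aircraft weight, N (roughly a 3-tonne fighter)

print(f"max climb rate ~ {specific_excess_power(v, thrust, drag, weight):.1f} m/s")
```

      Notice that lift never enters the calculation: cutting weight and drag buys climb rate directly, which is exactly the trade the J2M made.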

      The Mitsubishi J2M Raiden is an excellent example of what a fighter optimized for climb rate looked like.

      A captured J2M in the US during testing

      The J2M had a very aerodynamically clean design, somewhat at the expense of pilot visibility and decidedly at the expense of turn rate.  The airframe was comparatively light, somewhat at the expense of firepower and at great expense to fuel capacity.  Surprisingly for a Japanese aircraft, there was some pilot armor.  The engine was, naturally, the most powerful available at the time.  The wings, in addition to being somewhat small by Japanese standards, had laminar-flow airfoils that sacrificed maximum lift for lower drag.

      The end result was an aircraft that was the polar opposite of the comparatively slow, long-ranged and agile A6M Zero-sen fighters that IJN pilots were used to!  But it certainly worked.  The J2M was one of the fastest-climbing piston-engine aircraft of the war, comparable to the F8F Bearcat.

      The design requirements for climb rate were practically the same as the design requirements for acceleration, and could generally be reconciled with the design requirements for dive performance and top speed.  The design requirements for turn rate were very difficult to reconcile with the design requirements for climb rate.
      Roll Rate
      In maneuvering combat, aircraft roll to the desired orientation and then pitch.  The ability to roll quickly allows the fighter to transition between turns faster, giving it an edge in maneuvering combat.

      Aircraft roll with their ailerons by making one wing generate more lift while the other wing generates less lift.

      The physics from there are the same for any other rotating object.  Rolling acceleration is a function of the amount of torque that the ailerons can provide divided by the moment of inertia of the aircraft about the roll axis.  So, to improve roll rate, a fighter needs the lowest possible moment of inertia and the highest possible torque from its ailerons.
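      The torque-over-inertia relationship can be sketched with a toy model.  Treating a few large masses as point masses at some distance from the roll axis (all numbers hypothetical) shows why armament and fuel far out in the wings are so costly:

```python
def roll_inertia(point_masses):
    """Moment of inertia about the roll axis (kg*m^2), treating each
    (mass_kg, distance_m) item as a point mass at that distance from
    the aircraft centerline."""
    return sum(m * r * r for m, r in point_masses)

def roll_acceleration(aileron_torque, inertia):
    """Initial roll acceleration in rad/s^2: torque divided by inertia."""
    return aileron_torque / inertia

torque = 20000.0  # N*m of rolling moment from the ailerons, illustrative

# Engine, guns, and fuel kept near the centerline (the FW-190 approach)...
centralized = roll_inertia([(2000, 0.5), (150, 0.3), (400, 0.4)])
# ...versus the same guns and fuel pushed well out into the wings.
wing_mounted = roll_inertia([(2000, 0.5), (150, 2.5), (400, 3.0)])

print(f"centralized:  {roll_acceleration(torque, centralized):.1f} rad/s^2")
print(f"wing-mounted: {roll_acceleration(torque, wing_mounted):.1f} rad/s^2")
```

      With these made-up masses, moving the same guns and fuel outboard costs nearly a ninefold drop in initial roll acceleration, because inertia grows with the square of the distance from the axis.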

      The FW-190A was the fighter best optimized for roll rate.  Kurt Tank's design team did everything right when it came to maximizing roll rate.

      The FW-190 could out-roll nearly every other piston fighter

      The FW-190 had the majority of its mass near the center of the aircraft.  The fuel was all stored in the fuselage and the guns were located either above the engine or in the wing roots.  Later versions added more guns, but these were placed just outside the propeller arc.

      Twin engined fighters suffered badly in roll rate in part because the engines had to be placed far from the centerline of the aircraft.  Fighters with armament far out in the wings also suffered.

      The ailerons were very large relative to the size of the wing.  This meant that they could generate a lot of torque.  Normally, large ailerons were a problem for pilots to deflect.  Most World War Two fighters did not have any hydraulic assistance; controls needed to be deflected with muscle power alone, and large controls could encounter too much wind resistance for the pilots to muscle through at high speed.

      The FW-190 overcame this in two ways.  The first was that, compared to the Bf 109, the cockpit was decently roomy.  Not as roomy as a P-47, of course, but still a vast improvement.  Cockpit space in World War Two fighters wasn't just a matter of comfort.  The pilots needed elbow room in the cockpit in order to wrestle with the control stick.  The FW-190 also used controls that were actuated by solid rods rather than by cables.  This meant that there was less give in the system, since cables aren't completely rigid.

      Additionally, the FW-190 used Frise ailerons, which have a protruding tip that bites into the wind and reduces the necessary control forces:

      Several US Navy fighters, like later models of F6F and F4U used spring-loaded aileron tabs, which accomplished something similar by different means:

      In these designs a spring helped pull the aileron one way, while a small tab on the aileron's trailing edge deflected the opposite way, using the airflow itself to help move the aileron.  This reduced the force needed to move the ailerons at high speed.

      Another, somewhat less obvious requirement for good roll rate in fighters was that the wings be as rigid as possible.  At high speeds, the force of the ailerons deflecting would tend to twist the wings of the aircraft in the opposite direction.  Essentially, the ailerons began to act like servo tabs.  This meant that the roll rate would begin to suffer at high speeds, and at very high speeds the aircraft might actually roll in the opposite direction of the pilot's input.
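      This twisting effect can be sketched with the simplest linear aeroelastic model, in which aileron effectiveness falls off in proportion to dynamic pressure and therefore with the square of airspeed.  The reversal speed used here is a hypothetical placeholder; a real wing needs a proper structural analysis:

```python
def aileron_effectiveness(airspeed, reversal_speed):
    """Simplest linear aeroelastic model: effectiveness scales as
    1 - (q / q_rev), and dynamic pressure q goes with airspeed squared.
    Above the reversal speed the value goes negative: wing twist
    overpowers the aileron and the aircraft rolls the wrong way."""
    return 1.0 - (airspeed / reversal_speed) ** 2

reversal = 300.0  # m/s, hypothetical reversal speed for a flexible wing
for v in (100.0, 200.0, 290.0, 320.0):
    print(f"{v:5.0f} m/s: effectiveness {aileron_effectiveness(v, reversal):+.2f}")
```

      Stiffening the wing raises the reversal speed, which is why the clipped-wing Spitfires and the overbuilt FW-190 wing rolled so much better at high speed.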

      The FW-190's wings were extremely rigid.  Wing rigidity is a function of aspect ratio and construction.

      The FW-190 had wings that had a fairly low aspect ratio, and were somewhat overbuilt.  Additionally, the wings were built as a single piece, which was a very strong and robust approach.  This had the downside that damaged wings had to be replaced as a unit, however.
      Some Spitfires were modified by changing the wings from the original elliptical shape to a "clipped" planform that ended abruptly at a somewhat shorter span.  This sacrificed some turning performance, but it made the wings much stiffer and therefore improved roll rate.

      Finally, most aircraft at the beginning of the war had fabric-skinned ailerons, including many that had metal-skinned wings.  Fabric-skinned ailerons were cheaper and less prone to vibration problems than metal ones, but at high speed the shellacked surface of the fabric just wasn't air-tight enough, and a significant amount of airflow would begin going into and through the aileron.  This degraded their effectiveness considerably, and substituting metal-skinned ailerons largely restored it.
      Stability and Safety
      World War Two fighters were a handful.  The pressures of war meant that planes were often rushed into service without thorough testing, and there were often nasty surprises lurking in unexplored corners of the flight envelope.

      This is the P-51H.  Even though the P-51D had been in mass production for years, it still had some lingering stability issues.  The P-51H solved these by enlarging the tail.  Performance was improved by a comprehensive program of drag reduction and weight reduction through the use of thinner aluminum skin.

      The Bf 109 had a poor safety record in large part because of the narrow landing gear.  This design kept the mass well centralized, but it made landing far too difficult for inexpert pilots.

      The ammunition for the massive 37mm cannon in the P-39 and P-63 was located far enough forward in the nose that depleting it shifted the aircraft's center of gravity noticeably aft.  Once the ammunition was expended, the aircraft was much more prone to entering dangerous spins.

      The cockpit of the FW-190, while roomier than the Bf 109, had terrible forward visibility.  The pilot could see to the sides and rear well enough, but a combination of a relatively wide radial engine and a hump on top of the engine cowling to house the synchronized machine guns meant that the pilot could see very little.  This could be dangerous while taxiing on the ground.

    • By Collimatrix
      This is a 737-200.  It has two JT8D turbofan engines that live happily in pods underneath the wings, guzzling down air and Jet-A.

      This is an Me 262.  It has two Jumo 004 engines that live... not exactly happily in pods under the wings, guzzling down air and whatever the Nazis had that was flammable.

      This is an F-14A of VF-84 "Jolly Rogers."  It has two TF30 low bypass turbofans that sit at the end of long inlets with three variable-geometry shock ramps, a variable-position spill door and a boundary layer diverter per engine.  These elements are computer-controlled to optimize pressure recovery and oblique shock wave location, minimize spillage drag, and keep flow distortion to a minimum.

      Air intake design in combat aircraft turns out to be extremely complicated.
      Unlike an airliner, which is expected to cruise at subsonic speeds all the time, and unlike a wunderwaffe, which is expected to vaguely work enough so that the Americans give you a cushy technical consultant's job after the war instead of leaving you for the Russians, a modern fighter air intake has to work well at subsonic speeds, at supersonic speeds, and while the fighter is maneuvering; it must deliver undistorted air to the engines; and it must be as light as possible while adding as little drag and other aerodynamic disruption as possible.  Oh yeah, and nowadays it should contribute as little as possible to radar cross section.  Have fun!
      For good subsonic performance, the air intake has to produce smooth, gradual transitions in flow as it is decelerated and finally fed into the engine.  This produces a decrease in dynamic pressure and a corresponding rise in static pressure.  A relatively simple and light inlet design can do this well.
      For supersonic flight, things get more complicated.  The air must be decelerated to subsonic velocity by a shock wave, or, ideally, by a series of shocks.  The exact position and angle of the shock waves changes with mach number, so for very best efficiency, the intake requires some sort of variable geometry.
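      The advantage of a series of weaker shocks can be shown with the standard normal-shock total pressure ratio.  As a simplification, this sketch treats each oblique shock purely by an assumed normal Mach component (1.4, then 1.2), ignoring the ramp geometry a real inlet designer would work out from the theta-beta-Mach relation:

```python
def normal_shock_p0_ratio(mach, gamma=1.4):
    """Stagnation (total) pressure ratio across a normal shock at the
    given upstream Mach number -- the standard compressible-flow result."""
    a = ((gamma + 1) * mach**2 / ((gamma - 1) * mach**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * mach**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

# One strong normal shock at Mach 2...
single = normal_shock_p0_ratio(2.0)

# ...versus two weaker shocks, each characterized here only by an
# assumed normal Mach component, with lossless subsonic diffusion after.
staged = normal_shock_p0_ratio(1.4) * normal_shock_p0_ratio(1.2)

print(f"single shock at M2.0: {single:.3f}")
print(f"two weaker shocks:    {staged:.3f}")
```

      With these numbers the single Mach 2 normal shock keeps only about 72% of the freestream total pressure, while the staged compression keeps about 95%.  That difference is why supersonic inlets sprout ramps, cones, and spikes.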
      The first supersonic fighters used nose-mounted intakes.


      In a number of designs, there were central shock-producing spikes that also doubled as radar mounts:


      In these designs the shock cone could translate forwards and backwards some amount to optimize shock location.
      However, as radar became more and more important to air combat, shock-cone mounted radars ceased to be large enough to fit the wide, powerful radar sets that designers wanted.  The air intakes were moved to the sides and bottom of the aircraft.

      This Q-5 is a particularly good example because the design was originally based on one that had a nose-mounted intake (the J-6/MiG-19).
      Putting the intakes on the sides does get them out of the way, but it causes another problem.  Airflow moving over the surface of the fuselage develops a turbulent boundary layer, and ingesting this turbulent boundary layer into the engines causes problems in the compressors.  Aircraft with intakes mounted next to the fuselage, therefore, require some means of keeping the boundary layer air from getting into the engines.  Usually this is accomplished by having a slight offset and a splitter plate:

      However, there are other means of boundary layer management.  The JSF and the new Chinese fighter designs use diverterless supersonic inlets:

      In these a bump in front of the inlet deflects the boundary layer away from the engine intake using advanced fluid dynamics (read: sorcery).  This system is lighter, and probably allows better stealth than traditional inlet designs.
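      A back-of-the-envelope flat-plate estimate gives a feel for how thick the boundary layer that these devices must divert actually is.  A fuselage is of course not a flat plate, so the numbers are purely illustrative:

```python
def turbulent_bl_thickness(x, velocity, nu=1.5e-5):
    """Flat-plate turbulent boundary layer thickness estimate:
    delta ~ 0.37 * x / Re_x**(1/5), with Re_x = V * x / nu.
    nu is the kinematic viscosity of air (m^2/s, near sea level)."""
    re_x = velocity * x / nu
    return 0.37 * x / re_x ** 0.2

# Illustrative case: an intake 5 m back from the nose at 250 m/s.
delta = turbulent_bl_thickness(5.0, 250.0)
print(f"boundary layer thickness ~ {delta * 100:.1f} cm")
```

      A few centimetres of turbulent air is roughly in line with the visible offset behind a typical splitter plate, and the boundary layer only grows thicker the further aft the intake sits.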
      Fighters must be able to maneuver, sometimes violently, and this can affect airflow into the engines.  Placing the air intakes underneath the fuselage, or underneath the wings helps the situation at high angles of attack, as the fuselage or wing helps deflect the airflow towards the intakes:

      The intake locations of the F-16:

      and the MiG-29:

      take advantage of this fact.
      Finally, air intakes are potentially large sources of radar returns, so on modern designs they have to be tailored to minimize this problem.  One of the biggest ways to do this is to hide the engine's compressor blades from the front, as large, whirling pieces of metal are very good radar reflectors:

      As you can see, the compressor face of the engine in the YF-23 is almost completely hidden.  You can also see that the inlet duct avoids right angles that would act as retroreflectors, and that it has an unusual boundary layer management system.

      There is a lot more ground to cover, but these are the basics of how combat aircraft air intakes work, and why they look the way they look.
    • By Collimatrix
      One of the frustrations of being a child and reading lots and lots of books on combat aircraft was that there would be impressive-sounding technical terms bandied about, but no explanations.  Or if there were explanations I didn't understand them because I was a child.
      One of the terms that got thrown around a lot was "relaxed stability" or "artificial stability" or even "instability," and this was given as one of the reasons for the F-16's superiority.  Naturally, an explanation of what on earth this was was not forthcoming, but it had something to do with making the F-16 more maneuverable.
      This is partially true, but relaxed stability doesn't just make a plane more maneuverable.  It makes a plane better in general.
      Why is this so?  Let's look at a schematic of a typical aircraft:

      There are two points of interest here; the center of lift (CL) and center of gravity (CG).  The CL is the net point through which all aerodynamic forces acting on the aircraft pass.  Various things can cause the CL to shift around in flight, such as the wing stalling or the transition to supersonic flight, but we'll ignore that for now.
      The CG is the net center of mass of the aircraft.  The downward force of the weight of the aircraft will act through this point, and the aircraft will rotate around this point.
      The reason that this configuration is stable is that the amount of lift a wing generates is a function of its angle of attack (AOA, or alpha).  AOA is the angle of the moving air relative to the wing.  If the wing is more inclined relative to the air, it generates more lift up until it starts to stall.  The relationship looks like this:

      Obviously this depends on the exact shape of the wing and the airspeed, but you get the idea.  The lift increases as alpha goes up, but falls off after the wing stalls.
      This means that in a conventionally stable aircraft in level flight, anything that causes the nose to pitch up will cause the amount of lift to increase, but because the CL is behind the CG, this increased lift will cause a torque on the aircraft that will rotate the nose back down again.  Thus, any disturbances in pitch are self-correcting.  This is important because it means that a human being can fly the aircraft.  If random disturbances were substantially self-magnifying, the plane would begin to tumble through the air.
      There's a bit of a problem though.  Because the CL is behind the CG, the plane has a tendency to rotate downwards.  So, to keep the plane level the tail has to apply a torque to trim out this tendency to rotate.  The torque that the tail is applying is pushing downward, which means that it's cancelling out part of the lift!  Keeping the tail deflected also increases drag.
      These problems would go away if the arrangement were reversed, with the CG behind the CL:

      However, this would make the plane unflyable for a human.  But this is the 21st century; we have better than humans.  We have computers.
      A computer (actually, an at-least-triply-redundant set of computers) and an accelerometer detect and cancel out any divergences in pitch faster and more tirelessly than a human ever could.  The tail downforce becomes tail upforce.  Also (contrary to Wikipedia's shitty diagrams), the distance between the CG and CL is smaller on unstable designs, so the trim drag of the tail is smaller too.
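      The lift penalty can be made concrete with a toy trim calculation.  All the geometry and weights below are hypothetical (positions in metres aft of the nose, forces in newtons):

```python
def trim(weight, x_cg, x_wing, x_tail):
    """Solve the two conditions for trimmed level flight:
      lift balance:   L_wing + L_tail = weight
      moment balance: L_wing*(x_wing - x_cg) + L_tail*(x_tail - x_cg) = 0
    x_wing and x_tail are the wing and tail centers of lift.  A negative
    L_tail is a tail download.  Returns (L_wing, L_tail)."""
    l_tail = -weight * (x_wing - x_cg) / (x_tail - x_wing)
    return weight - l_tail, l_tail

W = 30000.0  # aircraft weight, N, illustrative

# Stable: wing center of lift behind the CG.
stable = trim(W, x_cg=4.0, x_wing=4.3, x_tail=9.0)
# Relaxed stability: CG behind the wing center of lift.
relaxed = trim(W, x_cg=4.5, x_wing=4.3, x_tail=9.0)

print("stable:  L_wing=%.0f N, L_tail=%+.0f N" % stable)
print("relaxed: L_wing=%.0f N, L_tail=%+.0f N" % relaxed)
```

      In the stable case the wing has to generate more lift than the whole aircraft weighs, because the tail is pushing down; in the relaxed case the tail carries part of the weight, so the wing works less hard and makes less induced drag.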
      OK, so unstable designs get a slight reduction in drag and a slight increase in lift.  Why is that a big deal?
      Think of a plane as a set of compromises flying in close formation.  Everything in aerodynamics comes at a cost.  Let's take a look at how this principle can kneecap people trying to be clever.
      The quicker of you will have no doubt objected to my characterization of stable aircraft losing lift due to tailplane downforce.  "But that doesn't apply if the plane is a canard design!  The CG will be in front of the CL, but still behind the canards, so the canards will generate an upforce to trim the plane out!  No need for fancy computer-flown planes here!"
      Yeah, they tried that.  But the need for CG/CL relationships ends up screwing you anyhow.  Let's look at a stable canard design (and one of my favorite aircraft), the J7W1 Shinden:

      Note that the wings are swept.  Now, this is a prop-driven plane, so I can guaran-fucking-tee you that the wings aren't swept to increase critical mach number (I don't think the designers even knew about critical mach number at the time).  Instead, the wings are swept for two reasons:
      1)  To move the CL back so that the plane is stable.
      2)  To move the rudders back so that they're far enough behind the CG that they'll have adequate control authority.
      There are lots of reasons you don't want swept wings on a prop fighter.  Since the thing is never going to go fast enough to benefit from them, the swept wings are, in fact, almost entirely a negative.  They reduce flap effectiveness and have goony stall characteristics.  If you could get away with not having them, you would.
      But you can't.  You can't because it's 1945 and the computers are huge and unreliable.  Your clever dual-lifting-surface canard design's advantages are heavily watered down by the disadvantages imposed by the need for stability.
      That is the big advantage of instability.  The designer has a lot more freedom because there's one less thing they have to worry about.  This can indirectly lead to huge improvements.  Compare a Mirage III and a Mirage 2000.  The Mirage 2000 is unstable, which adds some extra lift (nice, especially on takeoff, where deltas really hurt for lift), but more than that it allows the designer to move the wings further forward on the fuselage, which allows for better aft-body streamlining and better area ruling.  Instability doesn't allow for better area ruling per se, but it frees the designer enough that they could potentially opt for it.