One of the frustrations of being a child and reading lots and lots of books on combat aircraft was that there would be impressive-sounding technical terms bandied about, but no explanations. Or if there were explanations, I didn't understand them because I was a child. One of the terms that got thrown around a lot was "relaxed stability" or "artificial stability" or even "instability," and this was given as one of the reasons for the F-16's superiority. Naturally, an explanation of what on earth this was was not forthcoming, but it had something to do with making the F-16 more maneuverable. This is partially true, but relaxed stability doesn't just make a plane more maneuverable. It makes a plane better in general.

Why is this so? Let's look at a schematic of a typical aircraft: There are two points of interest here, the center of lift (CL) and the center of gravity (CG). The CL is the net point through which all aerodynamic forces acting on the aircraft pass. Various things can cause the CL to shift around in flight, such as the wing stalling or the transition to supersonic flight, but we'll ignore that for now. The CG is the net center of mass of the aircraft. The downward force of the aircraft's weight acts through this point, and the aircraft rotates around this point.

The reason that this configuration is stable is that the amount of lift a wing generates is a function of its angle of attack (AOA, or alpha). AOA is the angle of the moving air relative to the wing. If the wing is more inclined relative to the air, it generates more lift, up until it starts to stall. The relationship looks like this: Obviously this depends on the exact shape of the wing and the airspeed, but you get the idea. The lift increases as alpha goes up, but falls off after the wing stalls.
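That lift curve, and the restoring torque you get from putting the CL behind the CG, can be sketched as a toy Python model. All the numbers here (lift slope, stall angle, CG-to-CL distance) are invented for illustration, not taken from any real aircraft:

```python
# Toy model of static pitch stability. Lift grows with angle of attack
# up to stall, then falls off; the lift acts at the centre of lift (CL),
# which sits some distance BEHIND the centre of gravity (CG).
# All constants are made up for illustration.

def lift(alpha_deg, cl_slope=0.1, stall_deg=15.0):
    """Lift (arbitrary units) vs angle of attack: linear, then post-stall falloff."""
    if alpha_deg <= stall_deg:
        return cl_slope * alpha_deg
    # crude post-stall falloff
    return max(0.0, cl_slope * stall_deg - 0.05 * (alpha_deg - stall_deg))

def pitch_torque(alpha_deg, cl_behind_cg=0.5):
    """Torque about the CG. CL behind CG => lift produces a nose-DOWN torque."""
    return -lift(alpha_deg) * cl_behind_cg  # negative = nose-down

# A gust pitches the nose up from 5 to 8 degrees: more lift, and a
# stronger nose-down torque pushing the nose back where it was.
assert pitch_torque(8.0) < pitch_torque(5.0) < 0
print("disturbance is self-correcting")
```

The key point is the sign: more alpha means more lift, and more lift means a stronger nose-down correction, which is exactly the negative feedback described above.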
This means that in a conventionally stable aircraft in level flight, anything that causes the nose to pitch up will cause the amount of lift to increase, but because the CL is behind the CG, this increased lift will exert a torque on the aircraft that rotates the nose back down again. Thus, any disturbance in pitch is self-correcting. This is important because it means that a human being can fly the aircraft. If random disturbances were substantially self-magnifying, the plane would begin to tumble through the air.

There's a bit of a problem, though. Because the CL is behind the CG, the plane has a constant tendency to rotate nose-down. So, to keep the plane level, the tail has to apply a torque to trim out this tendency. The trim force the tail applies points downward, which means it's cancelling out part of the lift! Keeping the tail deflected also increases drag. These problems would go away if the arrangement were reversed, with the CG behind the CL:

However, this would make the plane unflyable for a human. But this is the 21st century; we have better than humans. We have computers. A computer (actually, an at-least-triply-redundant set of computers) and an accelerometer detect and cancel out any divergences in pitch faster and more tirelessly than a human ever could. The tail downforce becomes tail upforce. Also (contrary to wikipedia's shitty diagrams), the distance between the CG and CL is smaller on unstable designs, so the trim drag of the tail is smaller too.

OK, so unstable designs get a slight reduction in drag and a slight increase in lift. Why is that a big deal? Think of a plane as a set of compromises flying in close formation. Everything in aerodynamics comes at a cost. Let's take a look at how this principle can kneecap people trying to be clever. The quicker of you will have no doubt objected to my characterization of stable aircraft losing lift due to tailplane downforce. "But that doesn't apply if the plane is a canard design!
The CG will be in front of the CL, but still behind the canards, so the canards will generate an upforce to trim the plane out! No need for fancy computer-flown planes here!" Yeah, they tried that. But the need for CG/CL relationships ends up screwing you anyhow.

Let's look at a stable canard design (and one of my favorite aircraft), the J7W1 Shinden: Note that the wings are swept. Now, this is a prop-driven plane, so I can guaran-fucking-tee you that the wings aren't swept to increase critical Mach number (I don't think the designers even knew about critical Mach number at the time). Instead, the wings are swept for two reasons: 1) to move the CL back so that the plane is stable, and 2) to move the rudders back so that they're far enough behind the CG to have adequate control authority.

There are lots of reasons you don't want swept wings on a prop fighter. Since the thing is never going to go fast enough to see any benefit from them, the swept wings are almost entirely a negative: they reduce flap effectiveness and have goony stall characteristics. If you could get away with not having them, you would. But you can't. You can't because it's 1945 and the computers are huge and unreliable. Your clever dual-lifting-surface canard design's advantages are heavily watered down by the disadvantages imposed by the need for stability.

That is the big advantage of instability. The designer has a lot more freedom because there's one less thing they have to worry about. This can indirectly lead to huge improvements. Compare a Mirage III and a Mirage 2000. The Mirage 2000 is unstable, which adds some extra lift (nice, especially on takeoff, where deltas really hurt for lift), but more than that it allows the designer to move the wings further forward on the fuselage, which allows for better aft-body streamlining and better area ruling.
Instability doesn't allow for better area ruling per se, but it frees the designer enough that they could potentially opt for it.
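The trim bookkeeping from earlier in the post can be put in numbers. A minimal sketch, with a wholly invented weight and trim force: in level flight, wing lift plus tail force must equal weight, so a stable design's tail downforce makes the wing work harder, and induced drag scales roughly with the square of lift:

```python
# Worked trim comparison with made-up numbers. In level flight:
#   wing_lift + tail_force = weight
# Stable design: tail force is DOWN (negative contribution).
# Unstable design: tail force is UP (positive contribution).

weight = 100.0          # kN, hypothetical aircraft
tail_force_mag = 5.0    # kN of trim force, same magnitude either way

# Stable: the wing must carry the weight PLUS the tail's downforce.
wing_lift_stable = weight + tail_force_mag    # 105 kN

# Unstable: the tail helps lift, and the wing carries less.
wing_lift_unstable = weight - tail_force_mag  # 95 kN

# Induced drag goes roughly as lift squared, so the stable wing pays
# a drag penalty on top of the extra lift it must generate.
drag_ratio = (wing_lift_stable / wing_lift_unstable) ** 2
print(f"wing lift: {wing_lift_stable} vs {wing_lift_unstable} kN, "
      f"induced drag ratio ~{drag_ratio:.2f}")
```

A ~10 kN swing in required wing lift and a ~20% induced-drag difference from these illustrative numbers shows why "a slight reduction in drag and a slight increase in lift" is worth chasing.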
Every so often someone asks a question about the advantages of forward-swept wings, and usually they get a shitty half-assed answer about how they somehow improve maneuverability and stuff. I will attempt to provide a fully-assed answer.

The short version is that forward-swept wings do roughly the same thing as conventional aft-swept wings: they increase critical Mach number. I found an excellent video explaining transonic effects, so watch that first if you don't already know what that is. Typically, a straight wing starts experiencing shock wave buildup at around Mach 0.7. These effects are generally bad: control surfaces lose effectiveness, the aircraft's center of lift moves, stability can decrease, and drag greatly increases. It's generally desirable to delay the onset of this badness. The critical Mach number is strongly affected by the thickness-to-chord ratio:

So, critical Mach number could be increased by having really thin wings. The F-104 does this, but at the expense of having ridiculously tiny wings that generate barely any lift and have no internal volume for fuel storage. Critical Mach number could also be delayed by keeping a normal wing thickness but using a very long chord. This would improve the supersonic performance of the wing, but subsonic drag would suffer: the resulting wing would have a large amount of induced drag, and the additional wetted area would add drag of its own.

Finally, the wing could be swept. This increases the chord length relative to the airflow without giving the wing undue surface area, and thus subsonic drag. In theory, the critical Mach number could be increased by a factor equal to the inverse of the cosine of the sweep angle (much like calculating the LOS thickness of tank armor, and for the same reason), but secondary effects mean that it's less effective than this.
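The 1/cos(sweep) rule is easy to check numerically. A quick sketch, using the Mach 0.7 straight-wing figure from above; remember this is the theoretical ceiling, and real wings fall short of it:

```python
import math

# Simple-sweep theory: only the component of the flow normal to the
# leading edge "sees" the wing section, so the theoretical critical Mach
# number scales as 1/cos(sweep) -- the same geometry as line-of-sight
# thickness of sloped tank armor. Real wings do worse because of root,
# tip, and fuselage effects.

def critical_mach_swept(mcr_straight, sweep_deg):
    """Theoretical critical Mach number of a wing swept by sweep_deg."""
    return mcr_straight / math.cos(math.radians(sweep_deg))

mcr = 0.70  # typical straight wing, as in the text
for sweep in (0, 15, 30, 45):
    print(f"{sweep:2d} deg sweep -> theoretical Mcr ~ "
          f"{critical_mach_swept(mcr, sweep):.3f}")
```

Note how slowly the cosine moves at small angles: 15 degrees of sweep buys almost nothing, which is the same point the drag-coefficient plot below makes about the Me 262.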
The practical effect of sweep on drag coefficient looks about like this: (from Design for Air Combat)

This, incidentally, is why the ME-262 doesn't really have swept wings. The change in Mcr is basically negligible for any leading edge sweep under thirty degrees.

Note that this logic applies whether the wings are swept forwards or backwards; as far as delaying and reducing the transonic effects go, forward and rearward sweep should be equally effective. There are some secondary effects that make forward-swept wings more desirable, though. One of these is spanwise flow: in any swept wing, the air isn't just flowing over the wing, it's flowing across it as well. On an aft-swept wing this means that while pulling Gs the tips of the wings stall first. Since the tips aren't producing lift anymore, but the rest of the wing is, the center of lift of the wing moves forward, which means there's more pitch-up torque on the plane, which means the nose goes up even more and the stall gets worse. This is known as the "Sabre dance," as the F-100 displayed this undesirable property. With the wings swept forward, the roots of the wings stall first instead (although in practice, forward-swept-wing aircraft tend to have the wings attached well aft, so the CL still shifts forward during a stall).

To make matters worse, on an aft-swept wing the air spilling out sideways and the early tip stall interfere with the effectiveness of the ailerons, which means that the aircraft can lose roll control as it increases AOA. This is particularly alarming behavior during landing, when speed is low, AOA is high, and keeping the aircraft level is of paramount importance. Additionally, the air spilling outwards towards the wingtips reduces lift, so curing this bad behavior increases the lift coefficient.

So, forward-swept wings are a little more efficient, aerodynamically, than aft-swept wings. Why aren't they more popular?
The problem is something called aeroelastic divergence, which is engineer-speak for "the goddamn wings try to tear themselves off." I will attempt to illustrate with the finest MS Paint diagrams.

The amount of lift that a wing generates is a function of the angle of attack: the wing generates more lift the more inclined it is relative to the airflow. Wings in the real world are, of course, not perfectly rigid, so when they generate lift to pull the weight of the fuselage through the sky, they bend slightly. Swept wings aren't just bending, they're twisting as well, because the center of lift is not aligned with the wing's structural connection to the fuselage.

In an aft-swept wing, the force of the lift tends to twist the wing downwards. Increasing the angle of attack increases the lift, which increases this downward twist, which bleeds the extra lift back off; a naturally self-limiting (negative feedback) arrangement. In a forward-swept wing, it's exactly the opposite. When the angle of attack increases, lift increases and the wings twist themselves upwards, which increases lift even more, which increases the twisting...

This is why forward-swept wings had to wait until magical composites with magical properties were available: a composite skin can be laid up so that bending produces a nose-down twist, taming the feedback without a crushing weight penalty.
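The two feedback loops can be boiled down to one line of algebra. A toy sketch, with all constants invented: if each degree of effective alpha feeds back some fraction of itself through wing twist, the steady state solves alpha = alpha0 + feedback * alpha, and the solution blows up as the feedback coefficient approaches 1:

```python
# Toy model of aeroelastic feedback (all numbers invented).
# feedback < 0: aft sweep, lift twists the wing to LOWER alpha.
# feedback > 0: forward sweep, lift twists the wing to HIGHER alpha.

def effective_alpha(alpha0, feedback):
    """Steady-state angle of attack solving alpha = alpha0 + feedback * alpha.

    For feedback >= 1 there is no steady solution: the twist runs away,
    i.e. aeroelastic divergence.
    """
    if feedback >= 1.0:
        raise ValueError("aeroelastic divergence: wing twist runs away")
    return alpha0 / (1.0 - feedback)

print(effective_alpha(5.0, -0.5))  # aft sweep: twist unloads the wing
print(effective_alpha(5.0, +0.5))  # forward sweep: twist loads it further
# effective_alpha(5.0, 1.2) would raise ValueError: past divergence
```

In the real problem the feedback coefficient grows with dynamic pressure, which is why divergence sets in above a critical speed; a stiffer (or aeroelastically tailored) wing pushes that speed out of the flight envelope.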
Collimatrix posted a topic in Aerospace

This is a 737-200. It has two JT8D turbofan engines that live happily in pods underneath the wings, guzzling down air and Jet-A. This is an ME-262. It has two Jumo 004 engines that live... not exactly happily in pods under the wings, guzzling down air and whatever the Nazis had that was flammable. This is an F-14A of VF-84 "Jolly Rogers." It has two TF30 low-bypass turbofans that sit at the end of long inlets with three variable-geometry shock ramps, a variable-position spill door, and a boundary layer diverter per engine. These elements are computer-controlled to optimize pressure recovery and oblique shock wave location, minimize spillage drag, and keep flow distortion to a minimum.

Air intake design in combat aircraft turns out to be extremely complicated. Unlike an airliner, which is expected to cruise at subsonic speeds all the time, and unlike a wunderwaffe, which is expected to vaguely work enough that the Americans give you a cushy technical consultant's job after the war instead of leaving you for the Russians, a modern fighter air intake has to work well at subsonic speeds, at supersonic speeds, and while the fighter is maneuvering; it must deliver undistorted air to the engines; and it must be as light and offer as little drag and other aerodynamic disruption as possible. Oh yeah, and nowadays it should contribute as little as possible to radar cross section. Have fun!

For good subsonic performance, the air intake has to produce smooth, gradual transitions in flow as the air is decelerated and finally fed into the engine. This produces a decrease in dynamic pressure and a corresponding rise in static pressure. A relatively simple and light inlet design can do this well. For supersonic flight, things get more complicated. The air must be decelerated to subsonic velocity by a shock wave, or, ideally, by a series of shocks.
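Why a series of shocks beats a single one falls straight out of the standard normal-shock stagnation-pressure relation for air (gamma = 1.4). The Mach numbers below are illustrative, not taken from any particular inlet:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def normal_shock_recovery(m):
    """Stagnation-pressure ratio p02/p01 across a normal shock at Mach m.

    Standard compressible-flow relation; recovery = 1 means no loss.
    """
    g = GAMMA
    a = ((g + 1) * m**2 / ((g - 1) * m**2 + 2)) ** (g / (g - 1))
    b = ((g + 1) / (2 * g * m**2 - (g - 1))) ** (1 / (g - 1))
    return a * b

# Swallowing Mach 2 flow through one normal shock throws away ~28% of
# the stagnation pressure:
print(f"single normal shock at M2.0: {normal_shock_recovery(2.0):.3f}")

# If oblique shocks (whose own losses are smaller) first slow the flow
# to around M1.3, the terminal normal shock is far gentler:
print(f"terminal normal shock at M1.3: {normal_shock_recovery(1.3):.3f}")
```

The loss across a normal shock grows rapidly with Mach number, so staging the deceleration through several weak shocks recovers much more pressure than one strong one; that difference is engine thrust, which is why the F-14's ramps and spill doors earn their complexity.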
The exact position and angle of the shock waves change with Mach number, so for the very best efficiency, the intake requires some sort of variable geometry. The first supersonic fighters used nose-mounted intakes. In a number of designs, there were central shock-producing spikes that also doubled as radar mounts: in these designs the shock cone could translate forwards and backwards some amount to optimize shock location. However, as radar became more and more important to air combat, shock-cone-mounted radars ceased to be large enough to fit the wide, powerful radar sets that designers wanted. The air intakes were moved to the sides and bottom of the aircraft. This Q-5 is a particularly good example, because the design was originally based on one that had a nose-mounted intake (the J-6/MiG-19).

Putting the intakes on the sides does get them out of the way, but it causes another problem. Airflow moving over the surface of the fuselage develops a turbulent boundary layer, and ingesting this turbulent boundary layer causes problems in the compressors. Aircraft with intakes mounted next to the fuselage therefore require some means of keeping the boundary layer air out of the engines. Usually this is accomplished by having a slight offset and a splitter plate:

However, there are other means of boundary layer management. The JSF and the new Chinese fighter designs use diverterless supersonic inlets: in these, a bump in front of the inlet deflects the boundary layer away from the engine intake using advanced fluid dynamics (read: sorcery). This system is lighter, and probably allows better stealth than traditional inlet designs.

Fighters must be able to maneuver, sometimes violently, and this can affect airflow into the engines.
Placing the air intakes underneath the fuselage or underneath the wings helps the situation at high angles of attack, as the fuselage or wing helps deflect the airflow towards the intakes. The intake locations of the F-16 and the MiG-29 take advantage of this fact.

Finally, air intakes are potentially large sources of radar returns, so on modern designs they have to be tailored to minimize this problem. One of the biggest ways to do this is to hide the engine's compressor blades from the front, as large, whirling pieces of metal are very good radar reflectors: as you can see, the compressor face of the engine in the YF-23 is almost completely hidden. You can also see that the inlet duct avoids right angles that would act as retroreflectors, and that it has an unusual boundary layer management system.

There is a lot more ground to cover, but these are the basics of how combat aircraft air intakes work, and why they look the way they look.