
United States Military Vehicle General: Guns, G*vins, and Gas Turbines


Tied


  • 2 weeks later...

   TV Zvezda show about the BMPT. A bit from it (24:30):

 


 

   "And not from German Panther or Tiger, but from our IS-3 Americans copied their M48".

@Sturgeon @Collimatrix @Scolopax

 

   Capitalist pigs were exposed by this totally accurate show! Explain yourselves, decadent bourgeois! And stop copying our shit!

 


3 minutes ago, alanch90 said:

@LoooSeR when I saw that part I thought it was so absolutely stupid that I wondered whether it was actually a screw-up of the automatic subtitle translation.

   Our military experts are the most expert in their field! The M48 is just an IS-3 with American paint over true Soviet Green camo! 


I just saw some footage of the MBT-70 and it got me thinking. Given that the only real problem with that tank's layout was the placement of the driver in the turret, if autonomous driving becomes a mature technology in the next 10 years, it could make a lot of sense for the Americans to go for a somewhat similar design for the Abrams replacement. I mean, a crew of 2 (TC and gunner) + AI, all placed in the turret. That would lighten the tank considerably overall while still retaining a lot of armor for the crew and the ability for the TC to pop his head out of the turret, right?


9 minutes ago, alanch90 said:

I just saw some footage of the MBT-70 and it got me thinking. Given that the only real problem with that tank's layout was the placement of the driver in the turret, if autonomous driving becomes a mature technology in the next 10 years, it could make a lot of sense for the Americans to go for a somewhat similar design for the Abrams replacement. I mean, a crew of 2 (TC and gunner) + AI, all placed in the turret. That would lighten the tank considerably overall while still retaining a lot of armor for the crew and the ability for the TC to pop his head out of the turret, right?

 

How would the commander tell the tank where to go without effectively driving it himself? How would the tank get through rough terrain that can't be judged from a video image (the depth of water or mud, for example)? I'm quite skeptical about this sort of automatic driver, to be honest. It's totally different from driving on a road according to traffic signs and firmly fixed rules, and even that is a mess at the moment; for example, there are situations which don't have, and can't have, any correct solution from an automatic system. It's just that politicians don't like to hear that. 


37 minutes ago, Beer said:

 

How would the commander tell the tank where to go without effectively driving it himself? How would the tank get through rough terrain that can't be judged from a video image (the depth of water or mud, for example)? I'm quite skeptical about this sort of automatic driver, to be honest. It's totally different from driving on a road according to traffic signs and firmly fixed rules, and even that is a mess at the moment; for example, there are situations which don't have, and can't have, any correct solution from an automatic system. It's just that politicians don't like to hear that. 

Sooner or later they are going to figure out this whole autonomous driving thing, and once they do there's little incentive to keep a human driver in the tank. Let's say the AI will be able to understand the TC's verbal orders and will be able to draw on a variety of sensors, not just video footage (for example, laser terrain scanners, which are also needed if the tank is to have active suspension). Another possibility is that the gunner could double as driver for dealing with a particularly tricky obstacle, while in combat the thing could drive completely autonomously. The Israelis are actually going fully ahead with replacing the driver with AI.

I imagine such a tank could easily weigh in the low 50s, or even in the 40-50 ton range, while retaining crew protection equal to or better than an M1's. At that weight it becomes usable on many more battlefields. The Chinese are also going for a 2-man + AI (gunner) crew for the same reasons.


Laser scanners are a common thing in autonomous driving, usually combined with cameras and radars (because each sensor catches something better than the others in certain conditions), but that won't keep you from getting stuck in the first deep mud or snow you encounter. It can work on a hard surface, but we are still very, very far from having AI capable of getting through tough terrain. Even today there are many occasions where the crew has to get out and test the terrain before the driver goes in, and even then he has to know exactly how he wants to cross the obstacle so that he won't get stuck. This is a task I can't imagine AI doing. How can it make its own plan to cross such a piece of terrain without the simple option a human has of going out with a piece of wood, testing how deep it is in which place, and deciding where to go, where to push full throttle, and where to go slowly instead? Roads and hard surfaces are likely doable in the future, but mud, water, snow? Another thing: the bushes and little trees and all that green stuff. How can the AI judge what it can simply ignore and drive through, and what is dangerous to hit? That's something autonomous driving technology has never needed to solve either. 

 

Plus a hugely important thing: the other crew members must have absolute confidence in the driver. IMHO that won't come anytime soon. 


My point is that eventually they are going to solve it, and certainly within this decade, which roughly coincides with the timeframe for developing an Abrams replacement. I'm just wondering whether, taking for granted that the tech is going to work, such a layout would be tempting enough for them to actually go for it, since it combines a reduced overall tank weight with the situational awareness they love so much (and in theory can't get with an unmanned turret). 


I bet that within this decade there will be no self-driving MBT, except maybe in very special cases in extremely risky situations (an optionally manned vehicle used without its crew in urban combat could be a possibility). But even in such cases I think they will be driven by a driver sitting outside the vehicle rather than by an AI. 

 

Before AI is mastered in the air I seriously can't see any chance of success on the ground. The air is waaaaay easier to solve, and we are not yet at the point where AI is capable of more than very simple tasks in a completely free environment. 


16 hours ago, alanch90 said:

right?

Agreed, but like Beer says, we are a long way from that. Present UGVs can't tell a puddle from a river. Water depth is unmeasurable remotely, and guessing from context isn't available as yet. There are many other examples; any kind of complex terrain is just nowhere near AI-doable any time soon. Even fully manned tanks get stuck on obstacles, slide down slopes, etc.


3 hours ago, DIADES said:

guessing from context isn't available as yet

 

That's something that machine learning might actually solve quite soon.

The trend is relatively recent (it only really picked up around 2016), but right now everybody is trying to apply it to every domain imaginable and seeing where it works.

 

It does work really well on image recognition, so I actually see an autonomous AI gunner being a thing before an autonomous driver, which, as you said, would need to interpret more complex situations than what boils down to: enemy military target in sights? -> authorization to shoot? -> shoot. There is of course the ethical issue of letting a program "pull the trigger", which, while it comes up less often, could still happen with an AI driver. As @Beer said, there are problems where there is no "correct" solution a program could take.
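To make the shape of that loop concrete, here is a minimal Python sketch of an engagement chain where the program only proposes and a human authorizes; every name, label, and threshold is hypothetical, not any fielded system:

```python
# Hypothetical sketch of the "target in sights -> authorization -> shoot" chain.
# The classifier labels, thresholds, and handoff are all stand-ins; the point
# is that the program proposes and a human makes the release decision.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "tank", "truck", "civilian_car"
    confidence: float  # classifier score in [0, 1]

HOSTILE_LABELS = {"tank", "ifv"}
CONFIDENCE_FLOOR = 0.95  # below this, don't even ask the operator

def engagement_cycle(detection: Detection, operator_approves) -> str:
    # Step 1: enemy military target in sights?
    if detection.label not in HOSTILE_LABELS:
        return "ignore"
    if detection.confidence < CONFIDENCE_FLOOR:
        return "flag for human review"
    # Step 2: authorization to shoot? (kept as a human decision)
    if not operator_approves(detection):
        return "hold fire"
    # Step 3: shoot (hand off to the fire control system)
    return "engage"

print(engagement_cycle(Detection("tank", 0.98), lambda d: True))  # -> engage
```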

 

That said, there is no reason to think the program couldn't take more contextual information into account than just what the vehicle's camera sees.

For example, weather forecasts, or comparing the vehicle's position against a map to see if there is a body of water nearby.

The main problem, in my opinion, would be updating the contextual data as battlefield conditions change and feeding those changes to the program in a timely and secure manner, rather than collecting the data and letting the program take the decision to cross or not to cross that body of water.
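As a toy illustration of that kind of context fusion, here is a sketch that combines on-board and off-board cues into one mobility decision; the data sources, weights, and thresholds are all invented for the example:

```python
# Toy sketch of fusing camera output with off-board context before a
# mobility decision. Every data source, weight, and threshold is invented.

def water_hazard_risk(camera_wetness: float, rain_mm_last_24h: float,
                      water_body_within_m: float) -> float:
    """Combine on-board and contextual cues into a single risk score."""
    risk = camera_wetness                 # what the vehicle itself sees
    if rain_mm_last_24h > 20:             # weather forecast / history feed
        risk += 0.3
    if water_body_within_m < 200:         # map lookup around the GPS position
        risk += 0.3
    return min(risk, 1.0)

# The stale-data problem: if the map or weather feed lags the battlefield,
# the score is confidently wrong, which is exactly the failure mode above.
risk = water_hazard_risk(camera_wetness=0.4, rain_mm_last_24h=35,
                         water_body_within_m=150)
print("cross" if risk < 0.5 else "route around / ask a human")
```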

Link to comment
Share on other sites

   An AI gunner is much closer to us than an AI driver because of 2 factors. The 1st is the complexity of the task, which is higher for the driver because of how varied terrain can be at the "micro" level (around the vehicle) and at the "medium" level (50-100 meters around the vehicle: hills, rock formations, no roads, questionable condition of hillsides after rain, just to name examples). The 2nd factor is the current level of automation, where the AI gunner has a significant advantage: at this point the FCS already does a lot of the work of making the projectile fly into the target.
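For a sense of what that existing automation covers, here is a toy sketch of the lead-angle arithmetic an FCS handles; the drag-free flight-time model and the numbers are simplifications, not any real system's ballistics:

```python
# Rough sketch of the lead-angle arithmetic a fire control system automates.
# All values are hypothetical; a real FCS also corrects for drag, wind, cant,
# barrel wear, air temperature, and ammunition-specific ballistics.

import math

def lead_angle_mils(target_range_m: float, target_speed_ms: float,
                    muzzle_velocity_ms: float) -> float:
    """Estimate horizontal lead for a target crossing perpendicularly."""
    time_of_flight = target_range_m / muzzle_velocity_ms  # crude: ignores drag
    lateral_travel = target_speed_ms * time_of_flight
    angle_rad = math.atan2(lateral_travel, target_range_m)
    return angle_rad * 6400 / (2 * math.pi)  # convert to NATO mils

# Target at 2000 m crossing at 5 m/s, projectile at ~1600 m/s:
print(round(lead_angle_mils(2000, 5, 1600), 2), "mils")  # -> ~3.18 mils
```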

   The biggest problem for current AI development using neural networks is the lack of theory behind the machine learning process, and the lack of any formulated procedure for analysing a complex problem and coming up with a specific NN AI structure to solve it. Right now NN AI development is more a process of trial and error than a scientifically calculated development process.

   That's why, in the video posted earlier, getting one operator to control several UGVs is something that could happen within 5 years, but getting those UGVs into a real-world combat-capable state may not happen even in 10, as AI development for a whole system/unit of UGVs is going to be very tricky.


57 minutes ago, LoooSeR said:

   The biggest problem for current AI development using neural networks is the lack of theory behind the machine learning process, and the lack of any formulated procedure for analysing a complex problem and coming up with a specific NN AI structure to solve it. Right now NN AI development is more a process of trial and error than a scientifically calculated development process.

 

In a way there is no theory needed; it's basically fitting a mathematical model to the data by finding which parameters are statistically the most relevant. Some of the parameters the machine comes up with don't make any sense to us from a logical or physical point of view. It just works in a certain percentage of cases and fails completely otherwise. AFAIK, that's mostly what our brain does as well when inferring a solution from contextual data: "In this situation, it is highly likely that the correct answer is ..."
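A minimal example of what "fitting a mathematical model to the data" means in practice; this toy gradient-descent fit shows that the learned parameters are just numbers that minimize error, with no built-in physical meaning:

```python
# Minimal curve fitting by gradient descent: learn w, b so that y ≈ w*x + b.
# The learned numbers are whatever minimizes error on the data; nothing
# forces them to correspond to a physically meaningful quantity.

data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # (x, y) samples
w, b = 0.0, 0.0
lr = 0.05  # learning rate

for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}")  # lands close to y = 2x + 1 for this data
```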

 

On the downside, it means the program can come up with answers that we don't expect or understand, since there is no "logical" process involved.

And there lies the core of the ethical problem: since we don't know for sure what the program will come up with, we can't guarantee that we will deem the results, or even the reasons for them, ethical.

 

The final question is: is the judgment call from a human inherently superior to, or even fundamentally different from, the one made by an NN AI?

Right now, all I would say is that humans are capable of taking more parameters into account, especially abstract parameters that are obviously harder to translate into numerical values. The computer, on the other hand, has the advantage of being able to work with far more data than a human can memorize, and will be statistically correct more often than the human, at the risk of answering the wrong question.


19 minutes ago, Alzoc said:

 

In a way there is no theory needed

/.../

 

   It is needed. I didn't pull this argument out of nowhere; it was said directly by one of the researchers working on neural network AI systems (at SEED, an EA R&D department), who called it one of the major hindrances to NN AI development. And this is a problem for everyone who works in that field, including the military.

   Trying to make an AI system without a theoretical foundation is like trying to design a gun without the ability to calculate part stresses, pressure, impulse, etc. You can make something that kind of shoots most of the time, but if you are given the task of designing a gun that weighs X kg, has XYZ dimensions, and meets specific ROF, cartridge, and group/accuracy requirements, you will have a very hard time meeting them.


I think that's kinda what I meant?

 

You don't need to understand how it works to make it (mostly) work; it's just much harder to design it so that it gives predictable results within a set of constraints.

You can achieve a certain rate of success by (somewhat blind) trial and error and by feeding it enough data. You just don't know why it sometimes works and sometimes fails.

And when you're not sure it will work all the time and you don't know the reasons for the eventual failures, you can't put fail-safes in place, which makes it dubious to give the program the right to make life-or-death decisions.


20 minutes ago, Alzoc said:

/.../

You can achieve a certain rate of success by (somewhat blind) trial and error

/.../

   The point is that you might just as well achieve nothing with blind trial and error and find yourself in a dead end with the budget at zero, no time left to try something else, and no way to predict how much more of either you will need. That is not what you want when designing software for military use. The Army may play with AI for a bit now, and there is a real chance of it not getting anything workable in the field and setting the whole thing aside for several years until something changes.

   Fail-safes can be implemented even if you don't understand the exact workings of the AI's decision-making system, because fail-safes are placed on the AI's output (this can even be done as a separate program), and the outputs are known quantities. If you make a tank driven by AI, it will not suddenly take off and fly to the Moon. 
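A sketch of what such an output-side fail-safe can look like: a plain bounds-checking wrapper between the network and the actuators that needs no knowledge of the network's internals (the limits here are made up):

```python
# Hypothetical output-side fail-safe: a dumb, auditable filter between the
# AI driver and the actuators. It needs no insight into the NN's internals,
# only hard limits on what commands are ever allowed through.

MAX_SPEED_MS = 15.0   # made-up speed limit, m/s
MAX_STEER_RATE = 0.5  # made-up steering rate limit, rad/s

def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def safe_drive_command(nn_speed: float, nn_steer_rate: float,
                       obstacle_ahead: bool) -> tuple[float, float]:
    """Filter raw NN output against fixed, human-written rules."""
    speed = clamp(nn_speed, 0.0, MAX_SPEED_MS)
    steer = clamp(nn_steer_rate, -MAX_STEER_RATE, MAX_STEER_RATE)
    if obstacle_ahead:  # independent proximity sensor, not the NN's opinion
        speed = 0.0
    return speed, steer

# Whatever nonsense the network emits, the vehicle never exceeds the limits:
print(safe_drive_command(nn_speed=99.0, nn_steer_rate=-3.0,
                         obstacle_ahead=False))  # -> (15.0, -0.5)
```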

   AI gunners will not have the trigger in their hands for a long time after they start to make their way into ground military vehicles.

   AI systems are better implemented first on mine-clearing vehicles and support/logistics stuff, which AFAIK the US Army is also playing around with (logistics trucks with some sort of autonomous driving system).

