|
Post by Enderminion on Feb 15, 2017 13:20:00 GMT
Or, more to the point of space, "Rogue System", the DCS of space. "I recall videos of that game... (For me) there's just something tactile that's missing with modern gaming and interfacing with that level of detail. Wandering further off topic: makes me wonder what the gunnery or astronavigation consoles look like that our crewmembers are using... What are you looking at?" They would probably be computers of some form or another.
|
|
|
Post by bdcarrillo on Feb 15, 2017 13:27:29 GMT
"They would probably be computers of some form or another." A brief flight of fancy, but I'd envision far-future interfaces more akin to the docking control folks in The Matrix. Near future, something closer to a curved bubble over a couch: complete field-of-view coverage. Hmm... makes me wonder about the basis for the current crew quantity. Single roles make for an easy way to divvy things out, but hardly seem effective. Do we really need a couple guys staring at a reactor 24/7 (sols, of course)?
|
|
|
Post by Enderminion on Feb 15, 2017 13:32:51 GMT
"Do we really need a couple guys staring at a reactor 24/7 (sols, of course)?" I thought that too once, then I played Angels Fall First and saw this. That guy is strapped into a chair facing down (artificial gravity) doing whatever s/he is doing. I would not do that, and it makes me question what this person could be doing such that that position is the best. Completely unrelated, but the COs and bigwigs would probably have a holotank, and the lower ranks might have smaller holograms.
|
|
|
Post by bdcarrillo on Feb 15, 2017 13:44:36 GMT
Talk about ergonomics fail...
I doubt we'd ever see a bunch of staff running around (Battlestar Galactica, Spaceballs, Star Wars... etc.); I'd expect a lot more automation instead. I do have a hard time believing that I need ~30 people to run a single laser boat.
|
|
|
Post by omnipotentvoid on Feb 15, 2017 16:32:30 GMT
"Makes me wonder what the gunnery or astronavigation consoles look like that our crewmembers are using..." They likely aren't looking at many of those consoles. Both targeting and navigation are very sensitive tasks that may require extreme reaction times and extreme accuracy, which humans aren't really capable of. Mostly you'd be looking at info screens and diagnostic readouts to make sure the algorithms and components are working correctly, plus a limited user interface to tell the ship what you want it to do. There will be manual overrides, but they will be rather limited in their GUI, since they're meant as a backup rather than a primary means of navigation. Perhaps you'll find some more advanced stuff on private ships whose owners are piloting/gunnery geeks, with true manual-control fidelity for personal sport rather than efficiency. The only consoles that are truly necessary (where humans can outclass computers) are tactical and strategic interfaces. Tactics and strategy aren't problems solvable by computers, belonging at least to the complexity class of chess and probably sitting a few steps higher in the hierarchy of complexity. Given no major change to computational architecture on a theoretical level (basically: quantum computers, or any other form of processor architecture, won't solve this problem either; you need a different architecture for the principle of problem solving), humans will still be more adept at tactics and strategy than computers, requiring crew to do this.
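For a sense of why chess-class problems resist brute force, here's a rough back-of-the-envelope sketch. The branching factors and depths below are conventional ballpark figures (not from this thread): roughly 35 legal moves per chess position over an 80-ply game, and roughly 250 per Go position over 150 plies.

```python
# Naive brute-force search examines roughly branching_factor ** depth
# positions; even modest games blow past any conceivable hardware.
def tree_size(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

chess = tree_size(35, 80)    # ~35 moves/position, 80-ply game
go = tree_size(250, 150)     # ~250 moves/position, 150-ply game

print(f"chess ~ 10^{len(str(chess)) - 1}")  # ~10^123 positions
print(f"go    ~ 10^{len(str(go)) - 1}")     # ~10^359 positions
```

For comparison, there are on the order of 10^80 atoms in the observable universe, which is the usual way these numbers are put in perspective.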
|
|
|
Post by underwhelmed on Feb 16, 2017 10:25:08 GMT
"Tactics and strategy aren't problems solvable by computers... humans will still be more adept at tactics and strategy than computers, requiring crew to do this." I guess somebody forgot to tell Google's DeepMind that...
|
|
|
Post by omnipotentvoid on Feb 16, 2017 15:27:11 GMT
"I guess somebody forgot to tell Google's DeepMind that..." I don't follow?
|
|
|
Post by darkwarriorj on Feb 16, 2017 15:52:29 GMT
Google's DeepMind, in recent months, outcompeted and crushed a human Weiqi champion. For reference, Weiqi, also known as Go in Japan or Baduk in Korea (I think), is an ancient (c. 1000 BCE), incredibly simple, yet incredibly deep game which flatly cannot be played by brute-force calculating all paths to victory. There were some calculations to that effect, though I forgot where. It is what one would call above the complexity level of chess. Played on a 19 by 19 board, two players alternate turns placing their pieces (black or white). The pieces do not move. Pieces which are totally surrounded on all sides are captured and removed from the game, but destruction is not the goal. Rather, the goal is to win by controlling as much territory as possible by the end of the game, which is basically when both players decide "Yeah, looks like we can't really move the borders much of anywhere." The big news here is not that Google's DeepMind AI managed to beat a world Weiqi champion. That alone is impressive enough, considering that it is still impossible to brute-force calculate all solutions, and Weiqi AIs before this were so pathetic as to be unable to defeat even a novice at the game. The terrifying news is how DeepMind did it. In particular:
- Using innovative and completely unheard-of stratagems and playstyles to crush its human opponent. For reference, this is a game people have been competitively playing for around three thousand years. The AI made significant novel innovations in the very playing style of this game, starting with moves normally derided as utterly stupid and pulling off complete victory from them in a way that demonstrated it was planned from the start. In other words, it is smart, capable of recognizing patterns, and above all, creative. Or so I've read. I play Weiqi, but I get crushed by said novice AI.
- Having literally zero pre-programmed knowledge about the game. The creators of this AI didn't build it to play Weiqi; they built it to learn and adapt to whatever task it is given. The creators had no idea what the AI was doing; they had to hire a Weiqi expert to interpret the AI's moves for the developers themselves. In other words, it's not built to crush Weiqi; it is built to do your job.
- Its decisions are opaque. If I remember correctly, it is a "neural network" type of AI. In other words, it utilizes nodes and connections between those nodes in a way reminiscent of actual brains. It's not an obvious if-else branching method, at least as far as I've read. We can understand how and why these AIs function, but ask "what's the code for making this move?" and the best we can do is point at the entire AI.
Also, it's not being run on any particularly unique hardware. I do recall that such AIs tend to be run on graphics processing units rather than CPUs, because GPUs happen to perform a lot of the kind of calculations the AI needs, but otherwise it's not exactly new and novel hardware that allowed the AI to do this. So yeah, there's that. www.youtube.com/watch?v=SUbqykXVx0A Video for reference.
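The capture rule described above (a group with no adjacent empty points is removed) can be sketched in a few lines. This is an illustrative toy, not anything from AlphaGo; the board representation (a dict of occupied points) is my own choice for brevity.

```python
# A group of stones is captured when it has no liberties (no adjacent
# empty points). Board is a dict mapping (row, col) -> 'B' or 'W';
# missing keys are empty points.
def group_and_liberties(board, start, size=19):
    color = board[start]
    group, liberties, stack = {start}, set(), [start]
    while stack:  # flood-fill the connected group of same-color stones
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue
            p = (nr, nc)
            if p not in board:
                liberties.add(p)          # adjacent empty point
            elif board[p] == color and p not in group:
                group.add(p)
                stack.append(p)
    return group, liberties

# A lone black stone surrounded on all four sides has zero liberties:
board = {(9, 9): 'B', (8, 9): 'W', (10, 9): 'W', (9, 8): 'W', (9, 10): 'W'}
group, libs = group_and_liberties(board, (9, 9))
print(len(libs))  # 0 -> the black group is captured
```

The point of the toy is how simple the rules are; the depth the post describes comes entirely from the consequences of those rules on a 19x19 board.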
|
|
|
Post by gedzilla on Feb 16, 2017 16:34:15 GMT
"Google's DeepMind, in recent months, outcompeted and crushed a human Weiqi champion. [...] In other words, it's not built to crush Weiqi; it is built to do your job." Well, ... Fuck
|
|
|
Post by omnipotentvoid on Feb 16, 2017 16:52:31 GMT
"Google's DeepMind, in recent months, outcompeted and crushed a human Weiqi champion. [...]" DeepMind is a neural network capable of learning to solve discrete problems incredibly well. The reason the "problem" of tactics cannot be solved easily by a computer is that it isn't a discrete problem, but rather a class of problems. A tactical encounter would be a discrete problem. To elaborate on your example: the beauty of the human brain is not its ability to find an optimal solution to playing the game on a 19x19 grid, but rather its ability to adapt in such a way that it can play reasonably well on an arbitrarily sized grid. Look into computational complexity theory for more details. To summarize what is relevant here: self-learning AIs (in this case, specifically neural networks) are capable of finding highly efficient solutions to specific (discrete) problems given enough tries.
In combat, when the effective "size" of the problem (and perhaps even the complexity) is arbitrary and you have only limited tries (usually just one), no known modern computational technique can achieve anywhere near the capabilities of a human. Essentially you need a true AI that is faster than a human in order to replace them. DeepMind is far away from being a true AI.
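The fixed-problem-size point can be made concrete: a network whose first layer is sized for a flattened 19x19 board hard-codes the board size into its architecture, so a 21x21 position doesn't even fit through the input layer. This is a deliberately minimal sketch with made-up layer sizes, not a claim about how AlphaGo is structured.

```python
import numpy as np

# A "network" whose first layer is a weight matrix for a flattened
# 19x19 board. The architecture itself encodes the board size.
rng = np.random.default_rng(0)
W = rng.normal(size=(19 * 19, 64))   # input weights sized for 19x19

def forward(board_flat):
    return np.tanh(board_flat @ W)   # fails unless input is 19x19

ok = forward(rng.normal(size=19 * 19))   # works: (64,) activations
try:
    forward(rng.normal(size=21 * 21))    # 21x21 board: shape mismatch
except ValueError:
    print("21x21 board rejected by the 19x19 network")
```

A human, by contrast, carries the concept of the game rather than a weight matrix of a fixed shape, which is the adaptability the post is pointing at.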
|
|
|
Post by Easy on Feb 16, 2017 17:45:44 GMT
"I refuse to build a coil or rail that exceeds 90% efficiency." You can stop the 5 ton, 1.5 TJ slugs with less than 10 m of nitrile rubber, if that makes you feel any better. Most of the energy they have seems to vanish on impact. Doesn't even leave glowing craters in the rubber. Hypervelocity slugs have a tendency to obliterate themselves against the shockwave of the front layer of the slug and the top layer of armor vaporizing each other. You can end up making a shallow crater instead of a hole.
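As a sanity check on those figures (assuming "ton" means metric ton), the implied impact velocity follows directly from E = ½mv²:

```python
import math

# A 5-tonne slug carrying 1.5 TJ of kinetic energy: v = sqrt(2E/m).
m = 5_000.0   # slug mass, kg (5 metric tons)
E = 1.5e12    # kinetic energy, J (1.5 TJ)

v = math.sqrt(2 * E / m)
print(f"impact velocity ~ {v / 1000:.1f} km/s")  # ~24.5 km/s
```

At roughly 24.5 km/s that slug is firmly in the hypervelocity regime, well above the speeds where projectiles behave like fluids on impact, which is consistent with the shallow-crater behavior described above.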
|
|
|
Post by Enderminion on Feb 16, 2017 17:47:07 GMT
"You can end up making a shallow crater instead of a hole." APCR shells can solve that problem, at least partially.
|
|
|
Post by Easy on Feb 16, 2017 18:02:54 GMT
"APCR shells can solve that problem, at least partially." APCR won't make much difference at speeds where yield strength and ultimate tensile strength become irrelevant.
|
|
|
Post by Enderminion on Feb 16, 2017 18:14:59 GMT
"APCR won't make much difference at speeds where yield strength and ultimate tensile strength become irrelevant." The weaker material gets blasted into plasma expanding in all directions before the main penetrator hits the armour; I assume the stronger material can survive the plasma shock wave mostly intact. ___ _-_ _-_ _-_ _-_ That's what I mean by APCR, where _ represents weaker material and - represents stronger material.
|
|
|
Post by bigbombr on Feb 16, 2017 18:24:27 GMT
"The weaker material gets blasted into plasma expanding in all directions before the main penetrator hits the armour..." At extremely high velocities, cross-sectional density is more important than strength.
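That density point matches the classic hydrodynamic-penetration estimate: when impact stresses dwarf material strength, both penetrator and target flow like fluids, and penetration depth depends only on penetrator length and the density ratio. The specific materials below are my own example choices, not from the thread.

```python
import math

# Hydrodynamic limit: penetration depth P ~ L * sqrt(rho_p / rho_t),
# independent of yield or tensile strength.
def hydrodynamic_penetration(length_m, rho_penetrator, rho_target):
    return length_m * math.sqrt(rho_penetrator / rho_target)

# Example densities (kg/m^3): tungsten vs aluminum rods into steel armor.
p_tungsten = hydrodynamic_penetration(1.0, 19_300, 7_850)
p_aluminum = hydrodynamic_penetration(1.0, 2_700, 7_850)
print(f"tungsten rod: {p_tungsten:.2f} m per m of rod")  # ~1.57
print(f"aluminum rod: {p_aluminum:.2f} m per m of rod")  # ~0.59
```

Note that the strength terms have dropped out entirely, which is exactly why an APCR-style hard core buys little at these speeds: only length and density matter.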
|
|