Post by dragon on Jan 29, 2019 17:27:30 GMT
> I'm not saying OCR isn't possible (wasn't born just yesterday, and I can even tell You a real-youth story of 'how it's done on a medium scale' if You're interested), but indeed what You propose is a very crude, doomed-by-its-nature and slow-working (i.e. 'laggy') approach. You seem to like quite a bit of challenge, eh?

Of course it's slow, but I was only addressing possibility, not feasibility, which should be clear from my post. If a problem can be solved at all, then it can be solved no matter how slow and inefficient the algorithm is; it's just a matter of giving it enough time and enough processing power. The StarCraft people didn't run their AI on a potato laptop either (and not just because StarCraft II wouldn't run on a potato in the first place).

And no, it's not doomed by its nature. In fact, operating a UI designed for humans, including interpreting text in images, is the easy part of making such an AI. Wolfram Mathematica, which I use for scientific calculations, has functions to interpret handwritten equations, although I've never had an opportunity to test them in practice. COADE's interface, with its clear fonts and simple buttons, is easier to process than that.

It's also completely obvious that nobody will ever do something like that in practice, if only because COADE is such an extremely niche game that nobody will fund an AI project based on it, even if you could find a team interested in doing so. StarCraft is, in Korea, a national sport on the level of soccer in England.

An update to the game will break the AI, of course, but a deterministic system will always have a problem with arbitrary changes in its environment. Someday an AI might be developed that can treat those changes as a "metagame" of sorts, but even if that's possible, it's far off.
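To make the "clear fonts are the easy part" point concrete, here's a minimal sketch of template-matching "OCR" for a crisp, fixed-font UI. Everything in it is hypothetical: the 5x3 glyph bitmaps just stand in for glyphs one would crop from real COADE screenshots, and a real pipeline would binarize pixels first.

```python
# Hypothetical glyph templates, 5 rows x 3 columns, "1" = lit pixel.
# With a clean fixed font, exact matching is enough - no ML needed.
GLYPHS = {
    "0": ["111",
          "101",
          "101",
          "101",
          "111"],
    "1": ["010",
          "110",
          "010",
          "010",
          "111"],
    "7": ["111",
          "001",
          "010",
          "010",
          "010"],
}

def read_digits(row):
    """Scan a binarized pixel row of 5x3 glyphs (one blank column of
    spacing between them) and return the recognized string."""
    out, x, width = "", 0, len(row[0])
    while x + 3 <= width:
        patch = [line[x:x + 3] for line in row]
        for char, glyph in GLYPHS.items():
            if patch == glyph:
                out += char
                break
        x += 4  # glyph width 3 + 1 column of spacing
    return out

# A toy "screenshot" row spelling 1 0 7 (spaces become dark pixels).
screen = [
    "010 111 111".replace(" ", "0"),
    "110 101 001".replace(" ", "0"),
    "010 101 010".replace(" ", "0"),
    "010 101 010".replace(" ", "0"),
    "111 111 010".replace(" ", "0"),
]
print(read_digits(screen))  # -> 107
```

Handwriting needs fuzzy matching and learned models; a game UI with one known font reduces to this kind of lookup, which is why it's the easy part.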
Post by AtomHeartDragon on Jan 29, 2019 18:02:58 GMT
If an AI manages to devise a winning strategy for countering drones with manned capitals, will it speak for or against AI supremacy?
Asking for a friend.
Post by cipherpunks on Jan 29, 2019 18:51:34 GMT
> The Starcraft people didn't run their AI on a potato laptop

Let me guess: due to the abundance of 'off-the-shelf' AI libraries, they used Python, which still has a global interpreter lock like the 1990s never ended, right? ;) Mathematica, despite being 'fat', is awesome software. I, however, find that most of the time I'm okay with using either GNU Octave, which is slimmer and free, or - guilty as charged - the above-mentioned Python ;) If I needed more speed (and I don't for now), I'd code in C++, because doing that correctly is a challenge, but it seems easier to leverage heterogeneous performance with it, and the relevant libs exist.
Why, I'd try it myself, in my spare time ofc, only not with the current state of the game. Never say never; TEH internetz are huge, and lots of nerds are dwelling there.
> If an AI manages to devise a winning strategy for countering drones with manned capitals, will it speak for or against AI supremacy?

You mean hypothetically, IRL? Well, if an AI manages to somehow manipulate humans into risking their meaty lives riding a flame to certain death among the stars, then said humans can be said to have become analog peripherals of said AI. That would only speak to the AI's mastery of using every available peripheral. It might still be too early to speak of supremacy after that, as human peripherals have other uses too.

Edited to add: this idea (AIs using humans as peripherals) was further explored in Iain M. Banks' Culture series, of which I'd recommend The Player of Games.
Post by airc777 on Jan 30, 2019 3:14:38 GMT
> If an AI manages to devise a winning strategy for countering drones with manned capitals, will it speak for or against AI supremacy? Asking for a friend.

If it could be determined that it was a conscious decision, then we would have to address the possibility that it is either of friendly intent, or at least deliberately friendly in appearance.
Post by RiftandRend on Jan 31, 2019 3:16:29 GMT
> BTW, our thing as a species is creativity. A computer, no matter how cleverly programmed, is still only a computer, that is, a deterministic calculating machine. However complex, they will never have imagination. StarCraft II is a pretty well-defined game with fully known, static rules. Humans, on the other hand, can handle scenarios where the rules change and/or are unknown. Not everybody can do it well, but that's what "the real world" ultimately is. Playing a game like StarCraft II is an important milestone, but it's still not anywhere near what a human can do, not to mention it seems to be imitating human behavior instead of going its own way (low APM counts are a dead giveaway; if it came up with "its own" strategies, it'd be milking its superiority in that area for everything it's worth). So I'll start wondering when it can play by using a camera pointed at the screen and a robotic manipulator for the mouse and keyboard, because those are much harder problems than the game itself.

The AI was 'imitating' human behavior because it was trained on human replays. Its APM and other abilities were artificially limited, as making a micro-bot was not DeepMind's goal. If APM were unlimited, it could easily beat anything with blink-stalker micro.
Post by dragon on Feb 2, 2019 22:56:24 GMT
The article outright states that APM was limited because the AI learned from humans with limited APM; I haven't seen anything indicating otherwise. Limiting other abilities was also discussed - they had versions both with and without limited vision, for example.

It makes sense that it would imitate the limitations of human players as well as their successes. That was my point about this being, fundamentally, a deterministic system: if it's not in the training data, the AI won't "think" of it on its own. A human could, based only on knowledge of the game and his/her own capabilities.