|
Post by shurugal on Dec 6, 2016 18:29:24 GMT
Well, there's unaided visibility, and then there's telescopes. Even small telescopes can pick up significantly more than an unaided eye, and you'd only need a mirror diameter of 2.4 m to get the equivalent of the Hubble Space Telescope, so you could probably stick a few of those on what would essentially be an AWACS ship or drones. With a good infrared telescope you could probably pick up the exhaust plumes from drones and missiles, as long as your on-board computer could filter out the background. The problem with using telescopes to find things is that the more you zoom in, the longer it takes you to scan the whole sky. Searching the entire sky with the Hubble would take literally a megayear (1 My). Even searching a relatively small area at relatively low magnification can take a prohibitively long time: for example, anyone who uses a serious military-oriented flight simulator to do ground attack can tell you that it takes dozens of minutes to scan a battlefield with a CCTV sensor, in which time the enemy has probably already accomplished their objectives. Of course, all of this brings into question how we find ships orbiting other planets in the game... I guess the simplest explanation is ground-based observation units reporting on positions.
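For a sense of what "a 2.4 m mirror gets you Hubble" means in numbers, here is a rough sketch of the best-case angular resolution using the standard Rayleigh criterion (the 550 nm visible-light wavelength is an illustrative assumption):

```python
import math

def rayleigh_limit_arcsec(aperture_m: float, wavelength_m: float = 550e-9) -> float:
    """Best-case angular resolution of a circular aperture (Rayleigh criterion)."""
    theta_rad = 1.22 * wavelength_m / aperture_m   # diffraction limit in radians
    return theta_rad * 206265.0                    # radians -> arcseconds

# Hubble-class 2.4 m mirror in visible light:
print(rayleigh_limit_arcsec(2.4))  # ~0.058 arcsec
```

Real sensors rarely reach the diffraction limit, but it sets the hard ceiling on what any mirror of a given size can resolve.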
|
|
|
Post by amimai on Dec 6, 2016 18:36:43 GMT
You assume the correction burn would be in sensor range... they can just burn once at 10 Mm out and then make the terminal burn when under 300-400 km, or whenever they enter radar range.
Your Newtonian mechanics can be used against you; after all, ship engines are many times brighter and easier to locate. As long as I can see you and you can't see me, I can predict your motion and adjust to hit you without being visible.
|
|
|
Post by ross128 on Dec 6, 2016 19:51:20 GMT
And just how are you going to adjust without lighting any engines? That's the big thing that makes stealth in space hard. Any engine powerful enough to be interesting is going to be bright and hot in some spectrum, and as soon as you cut that engine your course becomes trivial to predict. So you're likely to always know where the enemy is, and you pretty much have to assume that they know where you are. A fleet can only really be as stealthy as its largest ship, because once the largest ship's torch has been spotted, you now have a narrow area that you can focus high-powered optics on. And if you have eyes on their point of origin (the ship, station, or rock they're launching from), you can easily track the entire trajectory from end to end using telescopes and math.
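The "trivial to predict" point can be made concrete: one position-and-velocity fix at engine cutoff, plus a two-body propagator, pins down the entire coast trajectory. A minimal RK4 sketch (the 7,000 km circular Earth orbit and 1 s step are illustrative assumptions):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def derivs(state):
    """Time derivative of (x, y, vx, vy) under point-mass gravity."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -MU * x / r3, -MU * y / r3)

def rk4_step(state, dt):
    """One classic Runge-Kutta 4 integration step."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(add(state, k1, dt / 2))
    k3 = derivs(add(state, k2, dt / 2))
    k4 = derivs(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# A coasting ship in a 7,000 km circular orbit: one fix, then pure math.
r0 = 7.0e6
state = (r0, 0.0, 0.0, math.sqrt(MU / r0))
for _ in range(1000):              # predict 1000 s ahead
    state = rk4_step(state, 1.0)
print(math.hypot(state[0], state[1]))  # radius stays ~7.0e6 m: fully deterministic
```

No sensor contact is needed during the coast; the predicted position only drifts if the target burns again, which lights it up.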
|
|
|
Post by shurugal on Dec 6, 2016 20:39:17 GMT
And just how are you going to adjust without lighting any engines? That's the big thing that makes stealth in space hard. Any engine powerful enough to be interesting is going to be bright and hot in some spectrum, and as soon as you cut that engine your course becomes trivial to predict. So you're likely to always know where the enemy is, and you pretty much have to assume that they know where you are. A fleet can only really be as stealthy as its largest ship, because once the largest ship's torch has been spotted, you now have a narrow area that you can focus high-powered optics on. And if you have eyes on their point of origin (the ship, station, or rock they're launching from), you can easily track the entire trajectory from end to end using telescopes and math. The larger ship, and the ship lower in the gravity well, will always be at a disadvantage in this scenario. The larger ship must expend more energy to achieve the same dV, and the deeper a ship sits in the gravity well, the more dV it must expend to change course. A missile high in the gravity well would need only a few mm/s of dV to match many m/s from a ship lower down, which could be accomplished by releasing compressed gas. It would therefore make sense for a long-range stealth missile attack to start by boosting the missile into a high elliptical orbit, ideally while occluded by the planet, making midcourse guidance updates at apogee on compressed gas, and saving the hot engine for terminal engagement. Under this scenario, you would never pick the missile up until it entered radar range, by which point your only practical defense would be to shoot it down.
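The delta-v leverage at apogee can be checked with the vis-viva equation: a tiny tangential burn high up shifts the perigee (the intercept region low in the well) by kilometers. A sketch, using an illustrative 6,700 x 40,000 km Earth orbit and a 1 m/s cold-gas nudge (all values are assumptions for the example):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def perigee_after_apogee_burn(r_p, r_a, dv):
    """New perigee radius (m) after a tangential burn of dv (m/s) at apogee."""
    a = (r_p + r_a) / 2                         # semi-major axis
    v_apo = math.sqrt(MU * (2 / r_a - 1 / a))   # vis-viva speed at apogee
    v_new = v_apo + dv
    a_new = 1 / (2 / r_a - v_new**2 / MU)       # vis-viva inverted for new orbit
    return 2 * a_new - r_a                      # r_p' = 2a' - r_a

r_p, r_a = 6.7e6, 4.0e7
shift = perigee_after_apogee_burn(r_p, r_a, 1.0) - r_p
print(shift)  # roughly 8 km of perigee shift per 1 m/s applied at apogee
```

A few mm/s from a cold-gas puff therefore moves the low-altitude aim point by tens of meters, with essentially no thermal signature.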
|
|
|
Post by thorneel on Dec 6, 2016 20:55:27 GMT
This is the only serious, realistic space stealth design I have ever heard about. And the entire internet has collectively been trying since before the internet was even a thing.
|
|
|
Post by nykoliski on Dec 6, 2016 21:07:13 GMT
The Atomic Rockets website has an excellent discussion of the hard science of infrared detection in space and addresses many of the points brought up, such as resolution, distance, and scanning time: www.projectrho.com/public_html/rocket/spacewardetect.php That page actually references a science discussion on Google Groups, which is where most of the math comes from. The short version is that a modern IR telescope, optimized for the exhaust plume detection spectrum, could detect the space shuttle firing its attitude control thrusters (RCS) at a range of 15 million kilometers. Suffice to say that higher-thrust engines, such as chemical or NTR, are going to be detected at even greater range.
|
|
|
Post by shurugal on Dec 6, 2016 22:24:57 GMT
The short version is that a modern IR telescope, optimized for the exhaust plume detection spectrum, could detect the space shuttle firing its attitude control thrusters (RCS) at a range of 15 million kilometers. Suffice to say that higher-thrust engines, such as chemical or NTR, are going to be detected at even greater range. My question is: over what field of view can it do this? A single sensor that can cover 5-30 degrees of sky would be invaluable militarily, but one that only covers a handful of arcseconds would be useless for spotting, and only good for tracking.
|
|
|
Post by newageofpower on Dec 6, 2016 22:46:03 GMT
This is the only serious, realistic space stealth design I have ever heard about. Learned something new. I'm somewhat dubious about the claims of extreme efficiency for the Solar Furnace, but if its actual efficiency is even close, this should work.
|
|
|
Post by nykoliski on Dec 6, 2016 22:56:00 GMT
The short version is that a modern IR telescope, optimized for the exhaust plume detection spectrum, could detect the space shuttle firing its attitude control thrusters (RCS) at a range of 15 million kilometers. Suffice to say that higher-thrust engines, such as chemical or NTR, are going to be detected at even greater range. My question is: over what field of view can it do this? A single sensor that can cover 5-30 degrees of sky would be invaluable militarily, but one that only covers a handful of arcseconds would be useless for spotting, and only good for tracking. Here is the answer, straight from Atomic Rockets: "A full spherical sky search is 41,000 square degrees. A wide angle lens will cover about 100 square degrees (a typical SLR personal camera is about 1 square degree); you'll want overlap, so call it 480 exposures for a full sky search, with each exposure taking about 350 megapixels. Estimated exposure time is about 30 seconds per 100 square degrees of sky looking for a magnitude 12 object (which is roughly what the drive I spec'd out earlier would be). So, 480 / 2 is 240 minutes, or about 4 HOURS for a complete sky survey. This will require signal processing of about 150 gigapixels per two hours, and take a terabyte of storage per sweep." The issue appears to be having enough processing power to analyze the exposures, rather than how long it takes the IR sensor to physically scan the sky. If the processing takes a significant amount of time, then it probably makes sense to have a high-resolution sensor for strategic detection and a low-resolution sensor for tactical tracking, which I think is what you are getting at. Bottom line though, at the rate at which computing power is increasing, this probably isn't going to be a big issue.
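The quoted survey time is easy to reproduce: 480 overlapping 100-square-degree exposures at 30 seconds each, 350 megapixels per frame (all figures taken from the quote above):

```python
exposures = 480             # full-sky search with overlap (41,000 sq deg sky)
seconds_per_exposure = 30   # per 100 sq deg frame, hunting a magnitude-12 object
megapixels_per_frame = 350

total_minutes = exposures * seconds_per_exposure / 60
total_pixels = exposures * megapixels_per_frame * 1e6

print(total_minutes)        # 240.0 minutes, i.e. 4 hours per complete sweep
print(total_pixels / 1e9)   # 168.0 gigapixels captured per full sweep
```

The sensor time is fixed by exposure physics; everything downstream is a signal-processing throughput problem, which is the part that scales with computing power.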
|
|
|
Post by cutterjohn on Dec 6, 2016 23:30:18 GMT
Well, there's unaided visibility, and then there's telescopes. Even small telescopes can pick up significantly more than an unaided eye, and you'd only need a mirror diameter of 2.4 m to get the equivalent of the Hubble Space Telescope, so you could probably stick a few of those on what would essentially be an AWACS ship or drones. With a good infrared telescope you could probably pick up the exhaust plumes from drones and missiles, as long as your on-board computer could filter out the background. The problem with using telescopes to find things is that the more you zoom in, the longer it takes you to scan the whole sky. Searching the entire sky with the Hubble would take literally a megayear (1 My). Even searching a relatively small area at relatively low magnification can take a prohibitively long time: for example, anyone who uses a serious military-oriented flight simulator to do ground attack can tell you that it takes dozens of minutes to scan a battlefield with a CCTV sensor, in which time the enemy has probably already accomplished their objectives. Of course, all of this brings into question how we find ships orbiting other planets in the game... I guess the simplest explanation is ground-based observation units reporting on positions. You're not scanning the whole sky. Ships are large and can be spotted, so you'd be watching the ship. qswitched subscribes to the 'spammed microdrone sensor platforms' theory, so you'd be able to watch what they were doing and where the missiles were going. When they're powered, you can see them. When they're unpowered, you do the math and keep track of where they are.
|
|
|
Post by amimai on Dec 7, 2016 0:10:57 GMT
The problem with using telescopes to find things is that the more you zoom in, the longer it takes you to scan the whole sky. Searching the entire sky with the Hubble would take literally a megayear (1 My). Even searching a relatively small area at relatively low magnification can take a prohibitively long time: for example, anyone who uses a serious military-oriented flight simulator to do ground attack can tell you that it takes dozens of minutes to scan a battlefield with a CCTV sensor, in which time the enemy has probably already accomplished their objectives. Of course, all of this brings into question how we find ships orbiting other planets in the game... I guess the simplest explanation is ground-based observation units reporting on positions. You're not scanning the whole sky. Ships are large and can be spotted, so you'd be watching the ship. qswitched subscribes to the 'spammed microdrone sensor platforms' theory, so you'd be able to watch what they were doing and where the missiles were going. When they're powered, you can see them. When they're unpowered, you do the math and keep track of where they are. "Ships are large"... oh boy, let me explain how small a ship is at 1 Mm. At 1 Mm, a 150 m ship appears to be 25e-6 m, or 25 micrometers, across. Let's put that into perspective, shall we? A bacterium is 10 micrometers in size, a human hair is around 50 micrometers in diameter, a sheet of paper is 75 micrometers thick, and a grain of sand is 150 micrometers. Good luck, you will need it... you are more likely to see artefacts generated by the lens than you are an enemy ship when using a wide-angle lens to scan for targets.
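The 25-micrometer figure can be reproduced with small-angle math. The ship's angular size is its length over its distance; projecting that angle onto a close viewing distance (about 17 cm here, which appears to be the implicit reference that yields 25 um) gives the "speck" size:

```python
ship_length_m = 150.0
distance_m = 1.0e6            # 1 Mm

angle_rad = ship_length_m / distance_m   # small-angle approximation
angle_arcsec = angle_rad * 206265.0

# Equivalent speck size at a ~17 cm viewing distance (an assumed
# reference distance that reproduces the 25 micrometer figure above):
speck_m = angle_rad * 0.167

print(angle_arcsec)    # ~31 arcsec of angular size
print(speck_m * 1e6)   # ~25 micrometers
```

Note that 31 arcseconds is well within the resolving power of a large telescope; the real difficulty is brightness and search time, not raw angular resolution.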
|
|
|
Post by ross128 on Dec 7, 2016 0:21:19 GMT
It's not as difficult as it sounds: the HST was able to spot a galaxy 1/25th of the Milky Way's size from 13.4 billion light-years away. Granted, at that distance there was a light lag of 13.4 billion years, but if we can snap a picture of something that far away, I think we can rather comfortably spot anything inside the Oort Cloud.
We are also currently tracking almost 3,000 asteroids with a diameter of 30m or less as part of the Near Earth Object program, which we can see when they are illuminated by the sun and can otherwise be tracked with math.
|
|
|
Post by amimai on Dec 7, 2016 0:33:59 GMT
It's not as difficult as it sounds: the HST was able to spot a galaxy 1/25th of the Milky Way's size from 13.4 billion light-years away. Granted, at that distance there was a light lag of 13.4 billion years, but if we can snap a picture of something that far away, I think we can rather comfortably spot anything inside the Oort Cloud. We are also currently tracking almost 3,000 asteroids with a diameter of 30m or less as part of the Near Earth Object program, which we can see when they are illuminated by the sun and can otherwise be tracked with math. The Hubble telescope also weighs 11 t and is 14 m long, not to mention that the exposure times it uses to take one photo are on the order of hours or days. I don't know the exact details, but the short-exposure images are 37 hours 10 minutes... (standard deep field: 37 hours; deep field: 23 days, which is what I think you were referring to; source: wiki). Sources: www.stsci.edu/hst/stis/software/planning/etc/ and etc.stsci.edu/etc/input/stis/imaging/ PS: telescopes also look at bright objects, with high emission or reflection of light, everything a stealth ship won't be. Even then, Pluto looks more like a pixelated blob than a planet in most images.
|
|
|
Post by shurugal on Dec 7, 2016 1:19:39 GMT
My question is: over what field of view can it do this? A single sensor that can cover 5-30 degrees of sky would be invaluable militarily, but one that only covers a handful of arcseconds would be useless for spotting, and only good for tracking. Here is the answer, straight from Atomic Rockets: "A full spherical sky search is 41,000 square degrees. A wide angle lens will cover about 100 square degrees (a typical SLR personal camera is about 1 square degree); you'll want overlap, so call it 480 exposures for a full sky search, with each exposure taking about 350 megapixels." Okay, but we're not talking about a typical SLR personal camera with a wide-angle lens, since a wide-angle lens is not a telescope. What angle of the sky is covered in a single exposure by a telescope capable of detecting the OMS plume of the space shuttle at 15Mm? I would be willing to bet a fair sum that it is a number expressed in radians or arcseconds, not degrees.
|
|
|
Post by nykoliski on Dec 7, 2016 3:50:35 GMT
Here is the answer, straight from Atomic Rockets: "A full spherical sky search is 41,000 square degrees. A wide angle lens will cover about 100 square degrees (a typical SLR personal camera is about 1 square degree); you'll want overlap, so call it 480 exposures for a full sky search, with each exposure taking about 350 megapixels." Okay, but we're not talking about a typical SLR personal camera with a wide-angle lens, since a wide-angle lens is not a telescope. What angle of the sky is covered in a single exposure by a telescope capable of detecting the OMS plume of the space shuttle at 15Mm? I would be willing to bet a fair sum that it is a number expressed in radians or arcseconds, not degrees. I see what you are getting at. This particular IR camera used in the Atomic Rockets calculations comes from the Google Groups discussion, and the specifications are as follows: "Telescope - 2 meter aperture, f/2 optics. Dall-Kirkham folded Cassegrain, most likely, but with f/2 it isn't as vital as with some systems. Detector - Tektronix 2048 x 2048 pixel CCD array, 350-1000 nm response, 80% quantum efficiency, 27 um pixel size. Angular resolution - 0.0004 degrees (detector-limited) Field of view - 0.8 degrees Scan rate - 3600 seconds for full sky assumed, thus 0.7 seconds per FoV Detection Threshold - 2.5E-17 watts per square meter, at 1:1 Signal to Noise over zodiacal background and 1E-9/pixel false positive rate." I'm not going to pretend I understand all of this, but the numbers you are looking for are the angular resolution (0.0004 degrees) and the field of view (0.8 degrees). Because the FoV is limited, the camera takes multiple exposures to capture the entire sky; at 0.7 seconds per exposure, it comes out to about 3600 seconds for a full-sky scan. Also, if I remember my metric prefixes correctly, 15 million km is 15 Gm.
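The quoted angular resolution and field of view actually follow directly from the optics: f/2 on a 2 m aperture gives a 4 m focal length, so one 27 um pixel subtends 27e-6 / 4 radians, and 2048 pixels of that span about 0.8 degrees (all input figures are from the quoted spec):

```python
import math

aperture_m = 2.0     # 2 meter aperture
f_number = 2.0       # f/2 optics
pixel_m = 27e-6      # 27 um CCD pixel
pixels = 2048        # 2048 x 2048 array

focal_length_m = aperture_m * f_number            # 4 m focal length
pixel_scale_rad = pixel_m / focal_length_m        # sky angle per pixel
pixel_scale_deg = math.degrees(pixel_scale_rad)
fov_deg = pixels * pixel_scale_deg

print(pixel_scale_deg)   # ~0.0004 degrees (the quoted, detector-limited resolution)
print(fov_deg)           # ~0.79 degrees (the quoted 0.8 degree field of view)
```

So the resolution is detector-limited as the spec says: the 2 m mirror could resolve far finer detail, but each pixel integrates everything within its 0.0004 degree patch of sky.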
|
|