Meta-comment: I haven't seen this noted elsewhere, but there's been a very sudden and significant acceleration in the pace of space launch. So far there have been 22 launches this year, which is about double the pace of last year. And there are more than 180 launches planned -- again, double the pace of last year. The most launches ever to occur in a single year was 139 -- during the height of the space race in 1967. Although many of this year's planned launches will undoubtedly slip into next year, we're quite likely to set an all-time record:
Also notable is the fact that of the 22 launches this year, three have been the debuts of entirely new vehicles, two of which were entirely privately funded. There are likely to be several more private launch vehicle debuts this year.
This is a really significant shift for what had long been a moribund industry, and a lot of complementary innovations will be able to piggyback onto this momentum.
For the most part, yes. Changes in the regulatory system have been one of the critical enablers for this during the last decade. The key area where the regulations haven't particularly kept pace is in extra-terrestrial property rights.
I'm interested in what the total system gain, EIRP and power looks like in an Astranis spot beam from geostationary, as compared to a current-generation 4000 to 6500 kilogram geostationary satellite with Ku and Ka band spot beams.
I am optimistic but also skeptical. The size and power of the satellite will influence what the size of VSAT terminals needs to be, and also the earth stations/major teleports. The example size of the satellite shown in the URL is so much smaller than current geostationary satellites that I don't see how the Tx power from each transponder will be anywhere near the power on a much bigger, costlier satellite.
Let's say for example I have put together from industry standard components, a 3.0 meter compact cassegrain Ku-band antenna in a remote part of Nepal, with a 40W BUC and a relatively recent Comtech EF Data modem. Are you planning on selling 1:1 dedicated capacity SCPC (and MCPC) type transponder kHz on a monthly basis? Or are you planning on standardizing on your own type of VSAT hardware terminal in bulk and selling contended access only?
Where do you see your value proposition for high capacity IP trunk links as compared to an ISP buying a 2 x 1.8m dish o3b terminal, and dedicated capacity through o3b? What will your $/Mbps rate look like compared to o3b with a monthly spend of 2500 or 3500 dollars?
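For readers following along, the comparison being asked about ultimately comes down to a link budget. Here is a minimal sketch of the downlink side only; every number (EIRP, frequency, G/T) is hypothetical and nothing below is a confirmed spec for any of the systems named in this thread:

```python
# Rough GEO downlink budget -- all values are hypothetical, for illustration.
import math

BOLTZMANN_DBW = -228.6  # 10*log10(Boltzmann constant), dBW/K/Hz

def fspl_db(freq_ghz: float, dist_km: float) -> float:
    """Free-space path loss in dB."""
    return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

def cn0_dbhz(eirp_dbw: float, freq_ghz: float, dist_km: float, gt_dbk: float) -> float:
    """Downlink C/N0 (dB-Hz) = EIRP - FSPL + G/T - 10*log10(k)."""
    return eirp_dbw - fspl_db(freq_ghz, dist_km) + gt_dbk - BOLTZMANN_DBW

# Hypothetical Ku-band downlink from GEO (nadir range ~35,786 km) to a
# 3.0 m antenna with an assumed G/T of 22 dB/K:
cn0 = cn0_dbhz(eirp_dbw=52.0, freq_ghz=12.0, dist_km=35786, gt_dbk=22.0)
print(f"C/N0 = {cn0:.1f} dB-Hz")
```

Given a C/N0 like this, dividing out the carrier bandwidth gives the C/N that determines which MODCODs close, and hence the Mbps per leased kHz.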
Hey there, I think you got the gist of it in a different part of this thread (each of our spacecraft has a fraction of the number of transponders of a typical large GEO). But if you'd like to discuss further I'd be happy to chat offline. Do you have an email address or other way that I could contact you?
I understand you are probably at a very early stage of product design for the satellite capabilities, so you are cagey and don't want to make any definitive statements of technical specs... But I'd rather not move the discussion off forum into private email when the other people reading this thread could possibly benefit from answers to my specific questions.
I am kind of concerned that your answers to others' questions in this thread are vague and noncommittal.
One more question. Do you intend to:
(a) lease raw transponder kHz capacity to third-party end users and ISPs (e.g., the Ku spot beams I can get on Russian satellites covering Afghanistan for a check of $4500 USD a month),
or (b) operate both the satellites and the earth stations yourselves, and resell fully packaged VSAT services directly to end users?
We're lucky to work with a great group of investors -- the investors we work with at Andreessen Horowitz both have PhDs, one in CS, the other in EE. Our investor at Lux Capital has a PhD in RF engineering. Another of our investors has a PhD in particle physics. Couldn't ask for a more knowledgeable, technically minded group.
I'm always happy to answer basic questions about what we're doing. But questions around very specific technical details, especially when it comes to things that give us a competitive advantage like our antenna design, just aren't appropriate for a public forum. Thanks for understanding.
For more detailed questions, we may or may not be able to answer them via email. But certainly I'd be happy to take a look, and if it's appropriate, put them in touch with the relevant member of the team. (See-- https://www.astranis.com/about/)
What effect does the small size of the satellite have on the cost and complexity of the terminal?
The small-aperture terminal market has been driven by GEO satellites getting bigger, a lot bigger. Yes, we can fit more satellite in a small package, and solar cells have gotten more efficient, but there are laws-of-physics issues involving the size of the antenna, power requirements, etc. that aren't going to go away.
Geosynchronous satellite spacing is determined by the ability of ground stations to resolve individual satellites, so only a certain number of satellites can fly; this creates pressure toward large, high-performance satellites that deliver maximum capacity rather than launching lower-performance satellites. Already the high-capacity satellites have insufficient bandwidth to serve demand (otherwise people would just be getting satellite instead of asking for terrestrial internet), and I don't see how low-capacity satellites will actually help.
I see a similar problem with the terminals for systems like the SpaceX constellation. To get "wireless equivalent" performance, I see the ground terminal requiring some kind of electronically scanned array, which would put the cost upwards of $6000.
There is a precedent for satellite services with a high-cost terminal: back in the 1980s many people would spend about that much for an unlicensed satellite terminal to receive TV, but since that was piracy there was no subscription fee. Compare that to $100 a month for cable, and the terminal pays for itself in 5 years.
I have this funny feeling that next-gen satellite providers want to have the expensive terminal AND the expensive service -- possibly because of the "Juicero" issue that the people bankrolling them don't know what prices look like to the average customer.
Hi there, great questions. The satellites are spec'd to deliver the same power (EIRP) on the ground as traditional large GEOs, with each of our satellites having a fraction of the capacity. This is a key point-- it will take several of our satellites to do the same job as one traditional large GEO. But that's not only ok, it's much better. It means we can put up a smaller chunk of capacity as needed and not do the "all or nothing" approach of a large GEO.
The high-capacity satellites do have insufficient bandwidth to serve demand, on that I agree. But that's not a spectrum limitation, it's a limitation on how many such large GEOs have been launched -- which is a small number, because each one is so expensive. There is still plenty more spectrum we can use for GEO telecoms, and more with proper frequency re-use schemes.
The point you make about the drive to large GEOs holds true if you assume only one spacecraft will ever be in a single orbital slot. But that's not the case. There are plenty of orbital slots where multiple GEOs are co-located, and this will increasingly be the case in the future. I'd argue the drive to large satellites was driven more by various incentives and systemic issues in the industry that pushed it in that direction. A much longer conversation, but these are all things that can be fixed.
And to answer your last question, because our satellites are spec'd to provide the same EIRP on the ground, the ground terminals used are the same as those in use today for GEO telecoms. These terminals are all off-the-shelf and very low-cost compared to the terminals necessary for a LEO-based network.
Your first paragraph: so basically, rather than a "typical" configuration, which might be 24 x Ku-band transponders, 36 MHz capacity each, supported by the solar arrays and power system on a giant 4000 kg geostationary bus...
You're putting the Tx power into the equivalent of just one or two 36 MHz Ku transponders per satellite? Fewer transponders per satellite in a tradeoff for greater Tx power in a smaller bus?
The key here is spot beams, right? The big satellites have dozens, maybe hundreds of spot beams covering multiple countries but yours has a handful? And multiple satellites can share the same orbital slot if their spot beams are pointed in different places, right?
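The raw-bandwidth side of that trade-off is just arithmetic. A sketch with purely illustrative transponder counts (these are not confirmed specs for anyone's spacecraft):

```python
# Purely illustrative: raw Ku-band bandwidth of a large GEO vs. a small one.
TRANSPONDER_MHZ = 36

large_geo_mhz = 24 * TRANSPONDER_MHZ   # "typical" big bus: 24 transponders
small_geo_mhz = 2 * TRANSPONDER_MHZ    # hypothetical small sat: 2 transponders

# If EIRP per beam is the same, capacity scales roughly with bandwidth,
# so about this many small satellites match one large one:
sats_to_match = large_geo_mhz // small_geo_mhz
print(large_geo_mhz, small_geo_mhz, sats_to_match)
```

Which is consistent with the "it will take several of our satellites to do the job of one large GEO" framing earlier in the thread.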
1. Our first design has a capacity of up to 10 Gbps. And we have a technical roadmap that can significantly increase that in future spacecraft designs.
2. For various reasons it doesn't make sense to go into that level of detail on a public forum.
Let’s see: they’re spending tens of millions of dollars to put something into geostationary orbit that will presumably last there for years. I’m thinking they’re probably using radiation hardened parts.
As someone who has done rad hard fpga design as well as radiation testing of components, I can say: that's too bad. Rad hard parts are so behind commercial, and have such high unit costs (RTG4 is ~90k each, 300krad TID), and the verification is grueling. It's fun to see your stuff leave the planet, though.
Maybe. If I remember correctly, the total ionizing dose (TID) at GEO is around double what it is at LEO. There are more single event effects (SEE) at GEO which would have to be handled. Most small sats at LEO use commercial parts and can operate for many years. The original Iridium constellation used commercial parts, though in LEO.
Why do you think this pinpoint, for-hire internet sat service is better than a constellation? Is it because you can profit right away without the initial setup cost of a constellation? But in the long run the constellation pays off and might drive you out of business, right?
If you think there is a rural/urban problem with optic fiber, you are jumping from the frying pan to the fire with LEO satellite constellations.
That's because an LEO satellite constellation has to cover the whole world (or almost all of it) to be able to cover any inhabited area at all. Thus it has to cover oceans, large roadless areas, deserts, mountains, etc. At least with optic fiber you only have to cover roads.
Note that high-density areas can cause trouble on the other side -- you need to support the highest density anywhere in the world that you offer service. Users in a place like New York City will generate noise affecting satellites serving users 1000+ miles away, so if you don't ban people from setting up accounts in dense areas, the service will have to be priced so high that even people with a lot of money looking for a backup connection will be priced out.
I am highly skeptical that the economics can work out for an LEO constellation.
This makes sense to me. Even with managed orbits, you still have a huge part of the fleet over empty areas. If the idea is to serve the high-bandwidth needs of autonomous ships, trucks, trains, planes, etc., maybe it makes sense...
In a world where OneWeb and/or Starlink exist, how would a service with 10x worse latency compete? Are you betting that both will fail? Otherwise you'll only have a couple of years to recoup the entire cost of your system before it becomes obsolete.
We're compatible with all the major launch vehicle families-- SpaceX Falcon 9, Atlas V, Ariane 5, and some of the other foreign launchers. Part of the great thing about where things are in the industry today is that ride shares for micro-satellites (where you fly as a piggy-back alongside a traditional satellite) are pretty much a solved problem.
Where does the "industry’s targeted $75 per megabit per second per month for dedicated bandwidth" figure come from? Is it a satellite industry figure, or submarine cable cost, or normal terrestrial fiber haul cost, or...?
$75 per Mbps per month is a weird figure. If they plan to resell transponder kHz capacity on a commercial basis, the Mbps capacity will be a function of the modulation scheme/coding density that can be pushed through a link. Let's say I write them a check for $1500 a month worth of transponder frequency, FDD, and set up a 3.0m earth station on a roof in Cologne, Germany and another identical 3.0m terminal in Skardu, Pakistan. Both with modern SCPC modems. This hypothetical example assumes crosslinked spot beam coverage between the two regions and appropriate satellite position in the geostationary arc.
The amount of Mbps I could push through that will be greater than if I do the same setup with a pair of 1.8m dishes. You can't know your Mbps figure for a dedicated geostationary satcom link until you know your RF link budget, system gain, and how close you can get to the Shannon limit. The same amount of kHz (same dollar spend per month) could be used at 16QAM 5/6 or at QPSK 1/2 modulation, with very different Mbps figures.
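To make that dependence concrete, here is a minimal sketch of how the same leased kHz turns into very different Mbps depending on modulation and coding. The bandwidth, roll-off, and C/N figures are hypothetical:

```python
import math

def throughput_mbps(bandwidth_khz, bits_per_symbol, code_rate, rolloff=0.2):
    """Mbps through a single carrier: the symbol rate is the occupied
    bandwidth shrunk by the pulse-shaping roll-off factor."""
    symbol_rate = bandwidth_khz * 1e3 / (1 + rolloff)   # symbols/s
    return symbol_rate * bits_per_symbol * code_rate / 1e6

BW_KHZ = 3000  # hypothetical 3 MHz of leased transponder bandwidth

print(f"QPSK 1/2 : {throughput_mbps(BW_KHZ, 2, 1/2):.2f} Mbps")
print(f"16QAM 5/6: {throughput_mbps(BW_KHZ, 4, 5/6):.2f} Mbps")

# Shannon limit for the same bandwidth at an assumed C/N of 12 dB:
shannon_mbps = BW_KHZ * 1e3 * math.log2(1 + 10 ** (12 / 10)) / 1e6
print(f"Shannon  : {shannon_mbps:.2f} Mbps")
```

Same monthly spend on kHz, yet the denser MODCOD moves more than three times the traffic, which is why a $/Mbps figure is meaningless without the link budget behind it.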
I don't see the novelty in this at all. Geostationary satellites have been pretty much the standard for satellite internet for years. I had a broadband connection like that at my first job, 15 years ago.
It's of course very welcome if they can bring higher transfer speeds, cleaner and lighter satellites, and newer technologies to the mix, but that's it. The biggest problem with satellite internet, the latency, is not going to be solved.
Which is fine, I must say, since they are in the business of bringing Internet to where there's none. But these new Internet users will arrive without access to the Internet's most incredible features, such as real-time voice and video communication, interactive website experiences, and online gaming.
Funny how you say those are the most incredible features.
For me (and obviously this is personal preference, so please don't take offence) these are "nice to have" features.
What I'm really after are (and have been, since I kinda grew out of FPS games)
* google/$INFORMATION
* Wikipedia
* podcasts
* spotify/torrent/$MUSIC
* blogs/twitter
* youtube/netflix/torrent/$VIDEO
pretty much in that order. None of which would be a problem on satellite (albeit less convenient) but would make a day/night difference for me.
As for the "interactive website experiences", that's usually the part I like the least about a website, which is why I use uMatrix most of the time, making pages more readable (or, depending on the JS affinity of the people making them, completely unreadable)
Most large GEOs today still use purely analog repeaters. Having a true digital payload, where we're doing digital signal processing on board the satellite, is a significant step forward.
To get to the lower costs that we're aiming for, going to software defined radios was a necessary step. That's because it gives us the ability to build many satellites that are as identical as possible without hardwiring in the specific frequencies like they do with analog satellites today. It's hard to overstate the importance of that technology for what we're doing and the low cost targets we need to hit to get unconnected people online.
Isn't latency a major issue for connectivity with satellites in geosynchronous orbit? I'm curious what their approach is here, either technically or what customer use cases they are targeting.
The distance to GEO does add some latency; that's the main trade-off. What we found in studying this problem for a long time was that 95% of internet traffic isn't latency-sensitive: CDN traffic, video streaming, audio streaming, file downloads, social media posts, etc. The bandwidth crunch is a huge problem to solve, but we realized there is low-hanging fruit here that we can go after by putting satellites one at a time in GEO and putting a dent in it immediately, vs. LEO constellations where you have to put up hundreds of satellites (at a cost of billions of dollars) just to get started.
I have worked over ssh over internet connections with satellite-class latency and I can say it is painful. (Emacs shell mode can help)
You probably think that 'the web' is a high latency application. It probably should be, and maybe it was in 2000. Since then, web developers have gotten into the habit of using AJAX indiscriminately, plus they feel pressured to add features such as customized fonts, advertising, third-party tracking, etc. I am not sure a CDN is really a net positive when a web site might need to do 30 DNS lookups because it uses 30 CDNs. It takes just one of those lookups being slow to obliterate the savings from the CDN. CDNs might help with the median, even the average load time, but I am not sure they help the 95th-percentile load time, which is what causes customer pain.
Add up all those round trips and the overhead of access control (maybe those patents on slotted ALOHA for satellite applications have expired by now) and you are talking upwards of 0.5 sec per round trip, and it doesn't take many round trips for that 0.5 sec to turn into 5 to 10 seconds.
Worst thing is that people who are developing locally or from places with fast connections to the data center will think these apps are really fast.
Agree on the AJAX calls. Modern web apps are a mess. Thankfully mobile apps are built quite differently, so that's an improvement. And the biggest potential market out there to serve is new users that will be coming online for the first time mobile-first.
Audio streaming not including voice calls I assume, because those are absolutely latency sensitive.
Are your terminals going to include TCP Performance Enhancing Proxies (PEPs)? The TCP three way handshake followed by the TLS handshake makes it kind of painful to browse on an unaccelerated network with a GEO hop. And this is before you get to the latency from the ground station to the website. The problem gets even worse in Web2.0 world where everything is running off of tiny queries that the web app developers assume will be back in 50ms or so.
TCP PEPs can only do so much too (cutting down on the roundtrips and ramping up the window size more quickly), eventually you hit the hard limits of physics.
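Counting the sequential round trips before the first byte shows exactly where a PEP can and can't help. A rough sketch; the GEO RTT and handshake counts are illustrative and ignore DNS, packet loss, and server time:

```python
# Time to first byte over a GEO hop, counting sequential round trips.
GEO_RTT_S = 0.55  # assumed user-observed GEO round trip, incl. processing

round_trips = {
    "TCP 3-way handshake": 1,
    "TLS 1.2 handshake": 2,
    "HTTP request/response": 1,
}
ttfb_s = sum(round_trips.values()) * GEO_RTT_S
print(f"TTFB ~ {ttfb_s:.2f} s")

# A PEP that terminates TCP at the terminal removes the first RTT, and
# TLS 1.3 removes one more; the final request/response RTT is physics.
pep_tls13_s = (sum(round_trips.values()) - 2) * GEO_RTT_S
print(f"with PEP + TLS 1.3 ~ {pep_tls13_s:.2f} s")
```

So acceleration roughly halves the time to first byte on a fresh connection, but the last round trip can never be optimized away.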
From my experience, it is not so hard to build completely terrestrial links with latency large enough that TCP still works (although it should not) but the TLS handshake will time out. For GEO links you will get exactly this behavior by using a BGAN terminal over the Bluetooth DUN profile (BT adds enough latency for this to happen).
The only time I've seen a terrestrial link with latency so bad that TCP couldn't be established, it also required you to install a poorly behaved app that reduced Windows' TCP retransmit limit to 1 (from 3).
TLS setups however as you note are a whole new ball of wax, especially if the link has a bit of loss to go along with the latency.
I'd take SpaceX's latency claims with a big grain of salt. They'll only be true if you're in close geographic proximity to one of their ground stations. Once you're routing packets between satellites to get to a ground station all bets are off.
Because SpaceX is known to over-promise and under-deliver? They seem to have an extremely powerful institutional engineering capability. That's not fanboy talk (e.g., "we're going to Mars!"); just look at what they've accomplished. Landing a rocket on its business end is no mean feat, for any engineering team, ever.
I know this is a YC forum, but seeing the Astranis team's responses to people asking very incisive questions is troubling. Maybe they're far better at engineering than at building confidence, but I detect the sort of single-minded dismissal (often of objective reality) that plagues a lot of entrepreneurs.
Maybe this is unfair but it sounds like someone got latched onto the idea of "micro-satellites" and won't let it go, ending up with "we can conceivably put up only X, so they'd have to be geosynchronous". Maybe there's a reason SpaceX is doing a LEO constellation, regardless of their unfair advantages in that regard.
Of course latency is an issue. Engineers will run away. Maybe you don't need them to succeed; maybe they are 0.1% of the market. (Of course, where would Apple be without software engineer and designer buy-in?) Maybe the average customer shouldn't care, because high latency is fine for them. That doesn't stop them from being swayed by marketing: show them a single infographic about why low latency is better, and, all else being equal, which service are customers going to choose?
Not sure why YC would have invested in this other than to broaden their portfolio. The chances of this company becoming a unicorn must be vanishingly small. If unicorns are no longer the VC mantra, cool, I've got a lifestyle business you can throw money at, provided there are zero expectations.
By all means prove me wrong, and best of luck to you. Take this as constructive criticism. If you're going to engage anyone in public, bring your A game.
Because of the logistics of running a large number of ground stations, especially with the super narrow beams SpaceX is planning to use for their service. In order to achieve the latency numbers Musk talked about (beating terrestrial microwave transmission for front running purposes) you would have to deploy them like cell towers, and that's incredibly expensive.
SpaceX latency claims are based on the speed of light. Low earth orbit has only slightly longer line-of-sight distances between locations than the earth's surface, and:
- The speed of light is ~50% higher in vacuum than in fiber[1]
- Fiber on the ground doesn't follow straight lines
Actually, because the speed of light in vacuum is faster than in an optical fibre, SpaceX does great as long as the satellite-to-satellite hops aren't congested and land near your actual destination. The question no one knows the answer to is how often that will be, compared to the case of the ground station being farther away from the destination.
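A quick back-of-the-envelope comparison of the two effects mentioned above. The route length, constellation altitude, and fiber velocity factor are rough assumptions, and the satellite path is assumed uncongested and roughly great-circle:

```python
# Back-of-the-envelope one-way latency: terrestrial fiber vs. a LEO relay.
C_KM_S = 299_792.458    # speed of light in vacuum
FIBER_FACTOR = 1.47     # light in glass travels ~1/1.47 of c

def fiber_ms(route_km):
    return route_km * FIBER_FACTOR / C_KM_S * 1e3

def leo_ms(ground_km, altitude_km=550):
    # up-leg + down-leg + inter-satellite legs, all at full vacuum c
    return (2 * altitude_km + ground_km) / C_KM_S * 1e3

# New York -> London, ~5,600 km great circle (real cable routes run longer):
print(f"fiber: {fiber_ms(5600):.1f} ms, LEO: {leo_ms(5600):.1f} ms")
```

Even with the two extra vertical legs, the vacuum path wins on this route; the margin grows once you account for fiber routes not following great circles.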
There is a difference between the CPE (what you use to get the service) and what SpaceX uses to sink your traffic into the global internet. And in fact there is exactly zero incentive for somebody other than SpaceX to run the ground stations.
On the other hand, it is completely possible that SpaceX's ground stations will use HW that is similar to their CPE box, and thus they can build a ridiculous number of such ground stations (i.e., at every IXP or so), although the "lightweight non-IP transport" proclamation somehow precludes doing that.
That would be an interesting model for the ground stations. Everybody else builds large and capable ground stations with big dishes. Iridium for example provides global phone and slow data coverage using only one public ground station, but also uses a completely different band to talk with the ground stations vs. the handhelds.
According to this Motherboard article [1] from Jan 27, 2018, only the Tempe, AZ gateway handles non-military traffic, with the US DOD and Russia operating the two other (military) gateways. I can't remember if the other gateways were shut down during Iridium's bankruptcy.
That was my first question too, once I saw that you'll be using geostationary satellites. But yes, I get your argument about acceptable latency vs cost. Sometimes I use Tor via nested VPN chains, for decent anonymity, and total rtt can be hundreds of ms or more. Web browsing, chats, downloads, and even ssh sessions, are all workable. And if push to talk is acceptable, even VoIP is workable at 1-2 sec rtt.
How is 10 Gb/s achieved, and how reliable is that BW in the presence of weather? What is the practical coverage area of the satellite, and what would the typical user BW be on a clear day? How would the BW change on a rainy day?
There's a reason that no other company tries to use geosynchronous orbit.
Geostationary orbit is at an altitude of 35,800 km above sea level, which implies a one-way latency of about 110 ms based on the speed of light. Since any request from a user requires a total of 2 round way trips (one for the request, one for the response), the minimum latency for a request is 440 ms.
Average latency with fiber is something like 30-60 ms, so we can assume an average request with Astranis will have ~500 ms latency.
Most modern webpages will not be able to support such latency. Astranis will need to essentially cache webpages on demand and deliver them to the end user as a fully rendered page, which will introduce security headaches.
I don't see why Astranis chose this vs a lower orbit.
You're right. I don't really know much about the field at all. I was basing my opinions off the article. I'm still surprised they're touting geostationary satellites as superior though. The speed of light is constant. SpaceX's plan while very expensive, at least theoretically could provide a complete internet experience. Same thing with Google's balloons. Astranis at best can only serve static sites.
"No other company tries to use geostationary"? Uhm, probably 85% of the satellite industry by gross revenue is based on geostationary platforms. Look at Intelsat, SES, Eutelsat, AsiaSat, the various North American Ku, BSS, and Ka band TV satellites, etc.
> any request from a user requires a total of 2 [sequential] round way trips (one for request, one for response)
Please elaborate? This makes no sense to me. Certainly, this is not the case for terrestrial communication (request/response latency is only 2× the one-way latency).
By "one way" the OP was referring to the ground-to-satellite trip taking 110ms. So by "round way trip" they meant a packet going from client to satellite to ground station (2 x 110ms). Thus 440ms.
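Plugging in the exact GEO altitude (nadir case, ignoring slant range and ground-side processing delays) gives slightly higher figures than the round numbers used above:

```python
# Minimum physics-only latency for a request over a GEO bent-pipe link.
GEO_ALT_KM = 35_786        # GEO altitude above the equator
C_KM_S = 299_792.458       # speed of light in vacuum

one_leg_ms = GEO_ALT_KM / C_KM_S * 1e3   # ground -> satellite, one leg
request_rtt_ms = 4 * one_leg_ms          # client->sat->gateway, and back
print(f"one leg: {one_leg_ms:.0f} ms, minimum request RTT: {request_rtt_ms:.0f} ms")
```

Real user-observed figures run higher still once slant range off-nadir, modem processing, and access-control overhead are included, which is how the ~500-600 ms RTTs reported elsewhere in the thread arise.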
https://en.wikipedia.org/wiki/Timeline_of_spaceflight