The author admits to having zero experience with carrier-level infrastructure, but their suspicions are essentially correct.
I actually have done a fair bit of 4G and 5G specific pentesting and security research for a number of major carriers. While it varies between carriers and between product vendors, it's still an absolute horror show. Until very recently, the security was entirely achieved through obscurity. The 4G and 5G standards have started to address this, but there are still gaps big enough to be deeply concerning. I don't think it's overly hyperbolic to assume that any moderately sophisticated threat actor who wants a beachhead on a carrier can achieve it. I've demonstrated this multiple times, professionally.
IMHO the hardware vendors from a certain East Asian state have such poorly written software stacks that they could almost be classified as APTs - security is non-existent. There are valid reasons Western countries have banned them. Western hardware vendors have significantly more mature software, but are still many years behind what most of us would consider modern security best practices.
A few years back the U.K. tried a political experiment in which it purchased Huawei equipment and also set up a special government/Huawei lab where they could analyze the source code to ensure it was safe to use. GCHQ found that the code quality made it unreviewable, and that they could not even ensure that the source code provided actually ran on the equipment (because Huawei had direct update capability.) I believe that equipment has been banned since 2020. https://www.washingtonpost.com/world/national-security/brita...
The media coverage at the time (which I followed closely because I worked in this space) indicated that the UK was under a great deal of pressure from the US to ban Huawei. The US was allegedly concerned that the use of Huawei equipment would allow US/UK shared intelligence to be eavesdropped by the CCP. The US pressure was widely viewed in the UK as having an economic purpose disguised as security.
>> The US pressure was widely viewed in the UK as having an economic purpose disguised as security.
I live in the UK. This may have been part of it, but to think that a communist dictatorship that (to pick a random example) harvests organs from political opponents is above backdooring their own kit is beyond naïve.
The ban always seemed weird to me. Not even a shred of a technical argument made it into public discourse when this was an issue. Governments just said "trust us" without giving any examples. This thread is the first time I read a hint at why that decision was made. Still, I don't know how much of this was a political stunt vs. grounded in reality. Maybe I am too jaded/cynical?
When it comes to government, it's hard to be too cynical. But in this particular case, it definitely was not a political stunt. There are a number of reasons for the limited disclosure - including NDAs signed by the governments and labs with the vendors in order to gain access to their intellectual property at a level sufficient to conduct the depth of analysis required.
I mean, it obviously did make it to the public because that WP article was written in 2019, and I remember hearing some of those details (that it wasn't so much "the code has backdoors" as "the code is so shit, it doesn't even matter if there's a backdoor in there deliberately") back then.
By the time any highly-technical topic makes it to the mainstream discourse, the details tend to get stripped out simply because none of the 70 year olds watching CNN or Fox appreciate the difference and none of the anchors or panelists know what they're talking about either.
Government secrecy when it comes to vulnerability research for foreign-produced hardware is entirely understandable. I don’t need to know. You don’t want your adversaries to know how much you figured out.
Government has a mandate. That's how it can function. They have evidence, why reveal it to the public (and thus China)? I grew out of my Ron Paul phase at 15.
You trust China over your own government? Move then.
Whoa there, kiddo! I deeply, deeply resent being called 15! The tone of your comment is just wild.
I am old enough to have seen several instances where organizations had internal reasons for their decisions and chose to argue something completely different in their outward communication. Given that an exclusion of Huawei had the obvious side effect of protecting domestic markets, this leaves quite some room for doubt around this specific instance. You say it yourself that governments have mandates.
It isn't a question of trusting China more, it is about the determination of whether China or a different government is the bigger threat. If my communication gets me in trouble it is much more likely to be with my local government than the Chinese. That and the Chinese equipment probably being cheaper and better casts a lot of doubt on whether conclusions from 5-Eyes countries are in my interests.
From that perspective it makes a big difference whether the Chinese have mostly secure back-doors or their software is just generally insecure.
Does it really need a control when many countries across the globe have independently tried it out and reached the same conclusion? I would say the results are pretty clear.
Anecdotally, having done multi-year deep-dive security reviews of both Asian and Western carrier equipment (and compared notes with many colleagues working on similar efforts), there is a stark difference. It's not even close. I've focused on firmware security analysis of RAN/eNodeB/gNodeB equipment but have also done many pentests targeting core infra as well. Western nations have actually done the baseline assessment over years and years of deployment and defence - this is why we are able to see the contrast in the comparison.
The main purpose of this system was not to judge code quality (although that's a very useful side effect!) The goal was to convince politicians that they could allow the installation of cheaper telecom hardware made by a geopolitical rival, yet also protect themselves from espionage and deliberate sabotage.
Now personally I would say that this is a crazy idea from the jump, given the usual asymmetry between attackers and defenders. But even if you grant that it's possible, it requires that you begin with extremely high standards of code quality and verifiability. Those were apparently not present.
You're not thinking of the entire scope of the issue. For example, the UK cannot legislate Huawei or any other Chinese company. You might say that that's true about the US too, and to some degree you'd be correct, but this also isn't taking into account that the US is (was?) a strong ally and this provides much more leverage over the situation. It ALSO means that IF these networks are being used to spy on citizens that there's a lower worry (still a worry, but lower). It would also mean that if this data is not being shared with the UK then this would be a violation of the 5 eyes agreement, which means the UK has more leverage over that situation.
So yeah, even if they are equal, there are A LOT of reasons to spend the extra money.
As the other respondents said, it’s an issue of threat modeling. If you essentially model the origin country as your ally, you still need to worry about rogue developers and bad code quality enabling outside exploits. If you model the origin country as a potentially enemy then you need a level of assurance that is vastly higher.
Also, even if all providers provide equally crappy versions, it's still slightly more secure to prefer a vendor in your own or an allied nation. At least your interests are mildly aligned.
That's a different kind of experiment and I just have to say that there is no "one size fits all" method of experimentation. The reason there doesn't need to be a control here is because comparators have ZERO effect on the questions being asked.
The question being tested is:
- Do Huawei devices have the capacity for adequate security?
Not
- Are Huawei devices better or on par in terms of security compared to other vendors.
These are completely different questions with completely different methods of evaluation. And honestly, there is no control in the latter. To have a control you'd have to compare against normal operating conditions and at that point instead you really should just do a holistic analysis and provide a ranking. Which is still evaluating each vendor independently. _You don't want to compare_ until the very end. Any prior comparison is only going to affect your experiments.
tldr: most experiments don't actually need comparisons to provide adequate hypothesis answering.
That control already exists because similar levels of audits have already happened on the competition. I'm not saying the competition is a shining example of quality, it definitely isn't, but it meets a bar of some set of basic security compliance standards.
I've been personally involved in evaluating the security of a certain vendor starting with the letter H. Let us just say they are "less than honest". I had pcaps of their bullshit trying to reach out to random C2 shit on the internet, which garnered a response of "there must be a mistake, that is not our software".
Let China sell their telecom bullshit to all the poor people of the world - they will learn hard lessons.
I'm not comparing it to an OS. I'm comparing it to other competitors in the particular solution space. To answer your question: no one else's equipment behaved in that manner.
> IMHO the hardware vendors from a certain East Asian state have such poorly written software stacks, that they could almost be classified as APTs - security is non-existent.
Thank god we have the hardware and software vendors from a certain North American state, who take security very seriously. Oh, wait ... /s
Given that Cisco has RCEs and hardcoded credential CVEs at least once every half year or so, the question does arise if our current level of audits is even remotely sufficient. And it's not Cisco alone - any major vendor of VPN or firewall or general network gear suffers from the same problem.
They are not. Cisco and literally every other major commercial IT vendor has software that can only be considered a pile of trash that is grossly and criminally inadequate against commonplace threats and attacks.
But imagine how bad your software has to be to not even be good enough to qualify as a pile of trash. Do not let terrible be the friend of bad.
One thing I absolutely don't understand about telecom security is how, in 2025, we're still using pre-shared keys in our mobile phone standards.
RSA and Diffie Hellman[1] have existed for decades, so have CA systems, yet SIM cards are still provisioned with a pre-shared key that only the card and the operator knows, and all further authentication and encryption is based on that key[2]. If the operator is ever hacked and the keys are stolen, there's nothing you can do.
To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
To me, this absolutely feels like a NOBUS vulnerability, if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
[1] I'm aware that those algorithms are not considered best practices any more and that elliptic curves would be a better idea, but better RSA than what we have now.
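To make the failure mode concrete, here is a minimal sketch (Python, with made-up key sizes and derivation labels - not the real MILENAGE/TUAK algorithms) of the symmetric AKA idea described above: the response and all session keys are derived from the one long-term key K, so anyone holding a copy of K can impersonate the subscriber or decrypt their traffic.

    import hmac, hashlib, os

    K = os.urandom(16)   # long-term key, provisioned into the SIM and the operator's AuC/HSS

    def aka(k: bytes, rand: bytes):
        # derive an authentication response and session keys from K and the challenge
        res = hmac.new(k, b"RES" + rand, hashlib.sha256).digest()[:8]
        ck  = hmac.new(k, b"CK"  + rand, hashlib.sha256).digest()[:16]   # cipher key
        ik  = hmac.new(k, b"IK"  + rand, hashlib.sha256).digest()[:16]   # integrity key
        return res, ck, ik

    rand = os.urandom(16)      # challenge chosen by the network
    sim_side = aka(K, rand)    # computed on the card
    net_side = aka(K, rand)    # computed by the operator - or by anyone who stole K
    assert sim_side == net_side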
> "AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden."
From what I remember from the early 2000s, only the air interface was encrypted. Since they have to provide lawful intercept capability anyway, there was not much benefit in providing end-to-end encryption. It’s not like it was a top-of-mind feature for consumers.
Blackberry's (probably) legally allowed to provide text message encryption, but telcos aren't. "Lawful intercept" (which should more accurately be called "gunpoint eavesdropping") is a legal requirement for all telcos, and the larger the telco, the more optimized and automated the process is required to be. They have to be able to read customer SMSes and tap phone calls. If the SMS happens to be gibberish, that's not their problem, but they can't make it gibberish.
In the late 90s/early 2000's, I would hear voice telephone conversations in central offices quite frequently. (Nobody was spying on purpose, or even paying much attention to what was being said. It was incidental to troubleshooting some problem report.)
This is still the case when troubleshooting POTS lines on analog PBX systems.
All you need is the probe side of a tone generator and you can listen to analog phone conversations in progress with no additional configuration or hardware.
That's done sometimes in central offices, although for analog lines a lineman's handset was the more common tool.
Digital test systems (I don't know what they use now; back then the venerable T-BERD 224 was the standard tool) can decode a single DS0 out of a larger multiplexed circuit and play the audio back and usually allow you to insert audio into a channel. That's normally what was being used to isolate a fault at one or more of the mux/demux/translation points.
Most of my telecom experiences were pretty boring. It largely consisted of handling digital circuits for modem banks, then later setting up a very small CLEC and building small PBX systems out of open source software in the early 2000s, which at the time worked about as well as you might imagine[0]. The outside plant people for the local ILEC had the best war stories:
* Someone tried to carjack a friend while he was suspended in the air in the bucket of a bucket truck, making a repair in a splice case[1].
* Another friend was making a repair in a bad part of town, and while doing some work in junction box (larger, ground-based version of a splice case,) a drug addict hobbled out of a nearby house and asked him if he was with the phone company. When he replied in the affirmative, the drug addict asked him to call 911 as one of his compatriots was ODing.
... etc...
I did get to help another service provider recover from a tornado by physically removing mud and debris from their equipment over the course of a few days and powering it back on. It almost all worked, with a few parts swapped out. I wrote about that one[2].
*Edit* I forgot I have one good CLEC war story. I wrote a test system that ended up calling 911 several times and playing a 1 kilohertz test tone at the 911 operator until they hung up. The test system was meant to troubleshoot an intermittent call quality issue that we were having difficulty isolating. It consisted of a machine with a SIP trunk on one side and an analog telephone on the other. It would call itself repeatedly, play the 1k test tone to itself, look for any audio disturbances, and record a lot of detail about which trunks were in use, etc., when that occurred. That all worked fine. The problem was the telephone number for the SIP trunk, which I remember to this day (20 years later) - 5911988. Every once in a while, when calling the SIP trunk from the analog line (this thing made thousands of calls,) the leading 5 wouldn’t get interpreted correctly, and the switch would just process the subsequent digits… 9, 1, 1 - as soon as that second one was processed, it sent the call to the local PSAP. After a few days a police officer showed up and asked us to please stop whatever it was we were doing.
0 - "not at all"
1 - in the US, anyway, these are the black cylindrical objects you see suspended from cables strung along utility poles
Some of these algorithms have to run on the SIM card, and smart cards (at least in the past) don't support RSA or (non-elliptic-curve) DH without a coprocessor that makes them more expensive.
Also, symmetric algorithms are quantum safe :)
But yes, I also wish that in 2025 we'd at least support ECC, which most smart cards should support natively at this point.
> To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
If you can't trust your SIM card vendor, you're pretty much out of luck. The attack vector for an asymmetric scheme would look a bit different, but if you can't trust the software running on them, how would you know if they were exfiltrating plaintexts through their choice of randomness for all nondeterministic algorithms?
If you have the ability to distribute keys directly, asymmetric cryptography adds complexity without much payoff. Certainly the idea that introducing RSA to a symmetrical system makes it more sound isn't well supported; the opposite is true.
The "NOBUS vulnerability" thing is especially silly, since the root of trust of all these systems are telecom providers. You don't have to ask if your American telecom provider is "in cahoots" with the US intelligence community; they are.
You appear to be neglecting the need for symmetric stream ciphers to achieve realtime communications (needed for performance reasons). No matter what you do, you are going to have a symmetric key in there somewhere for adversaries to extract. Once the adversary owns the telco, it is over (i.e., calls can be decrypted), no matter how strong the cryptography is. Your strongest cryptography cannot withstand a key leak.
Do you know how TLS works? The asymmetric keys are used to negotiate temporary symmetric keys, which are used for the actual data. That's exactly what the mentioned Diffie-Hellman algorithm does. Also check out "perfect forward secrecy".
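For readers unfamiliar with the idea, a rough sketch of that key agreement using the pyca/cryptography library (X25519 here; the names and the info label are arbitrary). The ephemeral keys are discarded after the session, which is what gives forward secrecy - note this alone does not stop an active man-in-the-middle, so real protocols also authenticate the exchange.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Fresh (ephemeral) key pairs, generated for this one session only
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key
    shared_a = alice.exchange(bob.public_key())
    shared_b = bob.exchange(alice.public_key())
    assert shared_a == shared_b

    # Derive the symmetric session key that actually encrypts the media
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"voice session").derive(shared_a)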
Of course I know how TLS works, as well as PFS. I recommend Kaufman on the subject. The general scheme you refer to is known as hybrid cryptography, and the key material that is derived is used to generate symmetric keys for the TLS session (several keys, in fact, separately for confidentiality and integrity, and for duplex communications). You missed my point completely, though. Unlike TLS sessions, which rely on packets, calls are multiplexed with TDMA or CDMA, for example. Unlike TCP, these channels have realtime requirements that necessitate stream ciphers be employed. I could ask you if you know how telecom works, but that would be childish and demeaning. As ephemeral as you wish to make it, the telco must know the secret key, for imagine if the call is being relayed to Timbuktu and must be passed in plaintext.
> these channels have realtime requirements that necessitate stream ciphers be employed
Even if that were relevant (you can easily convert a block cipher to a stream cipher): It's absolutely possible to do key derivation for a symmetric stream cipher asymmetrically.
> the telco must know the secret key
No, the telco must not know the secret key if they're serious about confidentiality.
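For what it's worth, a tiny illustration (pyca/cryptography again, all values made up) of a block cipher being used as a stream cipher via counter mode - each small voice frame is encrypted as it arrives, with no padding and no buffering:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

    frame = b"\x00" * 160                  # e.g. one 20 ms G.711 frame
    ciphertext = encryptor.update(frame)   # same length out, frame by frame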
Isn't every new mobile standard effectively a complete redesign of the core network anyway?
Sure, it'll take decades to be fully rolled out, but that's true for every large-scale change. The real problem is that it's not in the interest of stakeholders to have end-to-end security.
Judging by this and your other comment, you seem to have made up your mind that the powers that be are not interested in end to end security. You seem to be ignoring (or disregarding without explanation) similar engineering feedback independently provided to you by different people. Good luck to you, sir!
By stakeholders I don't mean the telecom industry, but the governments regulating it. Lawful interception is non-negotiable, and (working) end-to-end encryption would break that, so I predict that we'll never see it on the POTS, VoIP or circuit switched. (And even OTT VoIP is under constant political attack.)
> You seem to be ignoring (or disregarding without explanation) similar engineering feedback
You mean the other "Bellhead" comments explaining why it's technically impossible to do something on the POTS that's been solved in OTT VoIP for years, like real-time end-to-end encryption using block ciphers etc.?
Yeah, I do discount confident statements declaring something technically impossible when I've been happily using such a system for the better part of a decade.
You can "easily" convert a block cipher to a stream cipher on paper (e.g., using OFB or output feedback mode), but you will not get the performance. You clearly have no working knowledge here.
I don't doubt that it's hard in existing systems, which might not have AES hardware instructions, spare processing power available etc., but my point is more that, if it were made a design goal, it would be absolutely feasible.
If we can encrypt basically every HTTP request on the Internet, surely we can encrypt a few phone calls too?
But the main problem is not technical, but that stakeholders don't want to anyway (lawful interception etc.), so presumably nothing will change.
> If we can encrypt basically every HTTP request on the Internet, surely we can encrypt a few phone calls too?
Again, you seem to not understand the performance requirements of real-time audio. The amount of data is tiny, but the latency (and particularly jitter) requirements are on a completely different level than HTTP.
Given that Signal and WhatsApp manage it just fine even on the slowest Android smartphones made in the past decade (without hardware AES acceleration), I’d say you are vastly overestimating the computational load of symmetric encryption.
The added latency is probably undetectable, and unless the CPU is at capacity, there’s no extra jitter either.
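A back-of-the-envelope check, using assumed (deliberately conservative) numbers rather than measurements, of what symmetric encryption costs for one voice stream:

    # assumed numbers, not measurements
    frame_bytes  = 80        # ~20 ms of voice payload at ~32 kbps
    frames_per_s = 50
    aes_sw_rate  = 20e6      # bytes/s for AES in pure software on a slow phone core

    busy = frame_bytes * frames_per_s / aes_sw_rate
    print(f"{busy:.6%} of one core")   # roughly 0.02% - negligible next to the codec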
Conversely, you might be vastly overestimating channel capacity. If ALL subscriber calls were on WhatsApp or Signal, the network would grind to a halt.
Besides making no sense (modern networks are already largely VoIP based, so what’s the difference from a capacity point of view): What does that have to do with anything discussed in this thread, i.e. the feasibility of encrypting VoIP calls?
I'm really starting to wonder: Did I unintentionally send some kind of bat signal through time, channeling Bellhead objections to the feasibility of VoIP that have been thoroughly and empirically disproven years ago when the POTS largely switched to NGN and IETF standards, and people around the world have moved on entirely to Internet-based OTT VoIP services?
I was not commenting on VoIP, it works nicely and has for a long time, in the network core too. Mobile carriers do not use VoIP with the MS, to my knowledge. There's "Wi-Fi Calling", but that is the closest you're gonna get to packetized data streams reaching your phone (it sees traction where other reception is bad and the carrier has to rely on the Internet). Your use of "Bellhead" as a derogatory term is noted, and is more reflective on you than anything else. Feel free to have the last word, though.
No, they do, exclusively. LTE and beyond don’t even support circuit switched calls anymore.
Bellhead wasn’t intended in a derogatory way, just as a reference to the “Netheads vs. Bellheads” schools of thinking about networks.
I do have great respect for historical phone systems and the clever engineers making them work. In terms of absolute reliability, I think VoIP was indeed a step back (although I think that’s mainly due to modern engineering and QA practices than inherent limitations).
Exclusively? You make it sound like VoLTE is mandatory. That is not the case, to my knowledge. On a 4G network, for example, one does not always have VoLTE available, and yet one is always able to place voice calls. Since your conviction is palpable, if you could please provide a reference then that would help further the discussion. If not, then no worries, will find the information on my own.
Not without redesign. I am telling you that whatever key exchange you run, it will result in key material that is accessible by the telco and therefore by your adversary (e.g., PRC). This is true even if you deployed authenticated Diffie-Hellman between endpoints. You might be able to do secure VoIP on top of that, but you cannot use existing telco infrastructure for your calls without expecting the tower to be able to decrypt the call. The ability of the telco to decrypt the call is the very basis of CALEA and LI, or lawful interception modules, and the reason why Salt Typhoon works.
The point is to limit the damage of a key leak, not eliminate it. Limiting the scope of a compromise to a single connection rather than all communications for the past and future is an improvement.
And yeah, of course we're talking about a redesign. If we were content with the status quo why would we be here?
Yes, but that "somewhere" could very well be only the two phones involved in a call, with key establishment happening via Diffie-Hellman. Doesn't protect against an active attack, but there's no key to leak inside the network.
After seeing STIR/SHAKEN's implementation details (hey what if we used JWT, and then maximized the metadata leakage of who you're calling), I really do not want to trust telecoms to roll their own crypto.
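For context, this is roughly what a decoded SHAKEN PASSporT (RFC 8225/8588) looks like - the numbers and URL below are invented. It is an ordinary signed JWT, base64url-encoded but not encrypted, so the originating and destination numbers are readable by anything that can see the SIP Identity header:

    # decoded JWT header and claims; values are made up for illustration
    header = {"alg": "ES256", "typ": "passport", "ppt": "shaken",
              "x5u": "https://certs.example-carrier.test/sti.crt"}
    payload = {"attest": "A",                      # attestation level
               "orig": {"tn": "12015550101"},      # calling number
               "dest": {"tn": ["14045550199"]},    # called number(s)
               "iat": 1720000000,
               "origid": "de305d54-75b4-431b-adb2-eb6b9e546014"}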
> To me, this absolutely feels like a NOBUS vulnerability, if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
This feels like the obligatory XKCD comic[1] when in reality there isn't any secretive key extraction or cracking...things are just sent unencrypted from deeper into the network to the three-letter agencies. Telcos are well known to have interconnect rooms with agencies.
Not a requirement, but if for some reason you don't do the Right Thing that the NSA wants, oh dear your CEO goes to jail, he was a bad boy, look at all that insider trading. You'll do the Right Thing next time we ask, capiche?
There are also endless ramblings by a German blogger about how his very early development of encrypted digital telephony and data transfer at the University of Karlsruhe in the '80s/'90s was sabotaged by very incompetent and corrupt professors connected to this.
Are you suggesting end-to-end encryption? Telecom providers have to implement "lawful intercept" interfaces to comply with the law in many jurisdictions.
I think they're just suggesting improvements on device-to-network encryption. Requiring the sim card secret to live on the sim card and the network means it needs to be transmitted from manufacturing to the network, which increases exposure.
If it were a public/private key pair, and you could generate it on the sim card during manufacturing, the private key would never need to be anywhere but the sim card. Maybe that's infeasible because of seeding, but even if the private key was generated on the manufacturing/programming equipment and stored on the sim card, it wouldn't need to be stored or transmitted and could only be intercepted while the equipment was actively compromised.
This really is the least concern in the entire mess that is phone network security. (Credit and debit card issuers have the same key distribution and derivation problem, but it's ~fine, and there are robust standard solutions, such as deriving per-card keys at the personalization site using tamper-proof HSMs.)
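A sketch of that key-diversification idea (illustrative only - real card schemes typically use AES-CMAC-based derivation per the relevant specs, and the master key and ICCID below are invented): the issuer's master key never leaves the HSM, and compromising one card's key exposes only that card.

    import hmac, hashlib

    MASTER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")   # never leaves the HSM

    def derive_card_key(master: bytes, card_id: str) -> bytes:
        # unique per-card key derived from the card's serial number (ICCID)
        return hmac.new(master, card_id.encode(), hashlib.sha256).digest()[:16]

    card_key = derive_card_key(MASTER_KEY, "8944500212345678901")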
Even if SIM cards were to feature an asymmetric private key: What would you do with it? How would you look it up, and what would you use it for? There is simply no provision for end-to-end encryption in the phone network at the moment.
If there were, it would be a different story, of course, but I doubt that will ever happen.
As part of lawful intercept, they can't encrypt the traffic and then send the NSA the encrypted traffic. They have to send the unencrypted traffic. Or they go to jail.
you've missed the point. if it is e2ee, then there's nothing but noise going down that lawful intercept. the ISP upheld their obligation, yet nosy bitches get nothing.
okay. so let me break it down further. you and i exchange messages via e2ee app. i text you, the app encrypts it, then sends it down the wire. the TLA lawful intercepts that data, but it is just random noise because it is encrypted. your app finally receives the e2ee data, decrypts it, shows you the message.
the data in transit is encrypted beyond anything the ISP has control over, so if the ISP provides lawful intercept they have fulfilled their obligation to the TLA because they let them see the data. it's not the ISP's fault that you and I encrypted the data. this isn't TLS encryption.
if that's not clear enough, then someone else will have to step in as I have taken as far as I can
yeah? and? so? who does that? if you're concerned about being intercepted and are still using land lines, then you're really not concerned. we learned that in the 80s. if you're using SMS, you're also not really concerned. friends don't let friends use unencrypted.
I exclusively call landlines from my phone, at least using "actual" phone calls; businesses, to be precise.
For all person-to-person calls, my family and friends have long switched to FaceTime and WhatsApp, which are both encrypted. Why would I pay per minute for a less secure and lower fidelity (HD voice usually does not work internationally) channel?
That said, I really would prefer if the POTS were better secured, given that SSNs and payment card numbers are transmitted over it all the time.
I worked for a major telco in technical support/customer service.
I saw numerous security issues, and when I brought them up, with solutions to improve the service for customers, I was informed that the company would lose money.
Scammers are big customers for telcos, and when they get caught, and banned, they come back again and pay a new connection fee and start the cycle again. Scammers also enable feature upselling, another way to profit from not solving the problem.
To be honest, the conclusion of the blog post that Freeswitch is not budging from its community release schedule does not surprise me one iota.
Freeswitch used to have a strong community spirit.
Things all changed when they took a more aggressive commercial turn, a couple of years ago IIRC.
Since that point you now have to jump through the "register" hoop to gain access to stuff that should be open (I can't remember exactly what it is - IIRC something like the APT repos being hidden behind a "register" wall).
I don't want to "register" with a commercial company to gain access to the foss community content. Because we all know what happens in the tech world if you give your details to a commercial company, the salesdroids start pestering you for an upsell/cross-sell, you get put in mailing lists you never asked to be put on, etc.
In Signalwire’s defence, reading through the old mailing list, I got the feeling they drove the development of Freeswitch for years without being properly compensated by downstream projects. Sadly I’ve also seen other parts of the Voip community recalibrate their generosity when it comes to open source and I honestly can’t blame them.
The team behind Matrix.org talked about a similar problem in one of their FOSDEM’25 talks: commercial vendors free loading on development.
I think it's fair to assume that between foreign threat actors, the Five Eyes/other Western pacts, and the demand to make the line go up, there's no real anonymity online. If they want you, they've got the means to get you.
In reality that's really no different than the pre-internet age. If you don't want your stuff intercepted, you need to encrypt it by means that aren't trivial to access electronically for a major security apparatus. Physical notes, word-of-mouth, hand signals, etc.
Also, you need to be ready for the consequences of what you say and do online should a state actor decide to allocate the resources to actually act upon the data they have.
From the article I am not totally convinced that "Telecom security sucks today", given they just randomly picked Freeswitch to find a buffer overflow. "Telecom stacks" might or might not be insecure, but what's done here is very weak evidence. The Salt Typhoon attacks allegedly exploited a Cisco vulnerability, although the analysts suggest the attackers have been using proper credentials (https://cyberscoop.com/cisco-talos-salt-typhoon-initial-acce...) So nothing to do with Freeswitch or anything.
Cisco Unified Call Manager almost certainly has vulnerabilities, as does Metaswitch which has shambled along in network cores after Microsoft publicly murdered it, Oracle SBC is often wonky just doing the basics, whatever shambling mess Teams is shipping this week for their TRouter implementation definitely has Denial of Service bugs that I can't properly isolate.
Let's not even talk about the mess of MF Tandems or almost every carrier barebacking the web by slinging raw unencrypted UDP SIP traffic over the internet...
It is possible to build secure systems in this space, but instead we have almost every major telecom carrier running proprietary unmodifiable platforms from long dead companies or projects (Nortel, Metaswitch,etc) and piles of technical debt that are generally worse than the horribly dated and unpatched equipment that comprises their networks.
I find it absolutely insane that the industry standard for SIP trunks is unencrypted UDP, usually using IP-based authentication.
When I asked a popular VoIP carrier about this a while back, they argued that unencrypted connections were fine because the PSTN doesn't offer any encryption and they didn't want to give their customers a false sense of security. While technically true, this doesn't mean we shouldn't at least try to implement basic security where we can - especially for traffic sent over the public Internet.
My DOCSIS service provider turned off encryption. That's likely due to the certificate expiring on a popular modem brand. Key management is hard, certificate management is hard. Especially when they don't care about security. The encryption was only DES to begin with, instead of AES, which is supported in DOCSIS but few service providers bother with.
Anyone who has the tools to sniff DOCSIS can eavesdrop on my provider's nodes and hear the incoming leg of phone calls.
It'd be lovely to see some nations of the world pour some serious money into the various Linux Foundation (or other open source) telco & cellular projects.
Pouring money is not how you get good quality software. You need a company driving product quality. Most Linux foundation projects have companies heavily invested in productionizing the projects and that leads to them contributing to them to ensure high quality code. Code without a driving product tends to wander aimlessly.
Maybe the money should have more strings attached, be attached to grant proposals, whatever.
I don't see that that is an important or clarifying distinction. Governments should be directly helping, with money, somehow. Collectivizing the investment yields better returns and far better outcomes, and open source is the only way you're going to avoid risking your investment in a single company that may over time fail. Having your nation take its infrastructure seriously should be obvious, and this is how. And I disagree that good things only happen at companies. The post I was responding to stands as incredibly broad-scale evidence that that often doesn't happen.
The Linux Foundation is the thing financing backdoors. Do not confuse it with Linux. The only money from the foundation that goes to actual Linux is a couple of build servers and one event sponsorship. Absolutely nothing else.
I worked with telecom code. It's code that parses complicated network protocols with several generations of legacy, often written in secrecy (security by obscurity), and often in C/C++.
Yep. And the network appliance world also tried to make that a "feature", by making things like "management VLANs" and pretending that you don't need to be secure because of it.
I don't doubt that this cruft is insecure. It's just a bit of a stretch to get to that conclusion from finding a potential buffer overflow in Freeswitch. Maybe it's not a stretch but just a conclusion by analogy but then you might just say "all software is insecure".
I've had a few conversations with [security nerds more familiar with telecom] since SignalWire broke embargo.
The "everything sucks and there's no motive to fix it" was a synopsis because, frankly, those conversations get really hard to follow if you don't know the jargon. And I didn't feel like trying to explain it poorly (since I don't understand the space that well, myself), so I left it at what I wrote.
(I didn't expect Hacker News to notice my blog at all.)
As a security nerd working within telecom: agreed. Nobody really cares about security issues. And when people already struggle to care about the issues, it gets even worse when fixing some of them (such as SS7 vulns) requires coordination with telcos around the world. cape[1] at least seems like it's a breath of fresh air within the space.
I'll have to try to find a video of the HOPE presentation where I first heard about SS7 and how riddled it was with known vulnerabilities, my jaw hit the floor.
Can confirm. It’s not even nonchalance, but outright hostility to security, because that sounds like work and change. And if there’s anyone who hates change, it’s telecom. They still resent having to learn VoIP, and it could have kids in college at this point.
> (I didn't expect Hacker News to notice my blog at all.)
Your blog actually gets posted somewhat regularly [0]. I actually remembered it, because it’s one of the rare cases where I like the "cute" illustrations.
This blog article is a combination of "I did a thing I do for a living" plus "recipient of my report does not share my (and the rest of the Security Industry) values", and concludes that Telecom security sucks.
It's very nice that the author spent their free time looking at code, found a bug and reported it -- I don't want to discourage that at all, that's great. But the fact that one maintainer of one piece of software didn't bow and scrape to the author and follow Security Industry Best Practises, is not a strong basis for opining that "Telecom security sucks today" (even if it does)
If someone came to you with a bug in your code, and they didn't claim it was being actively exploited, and they didn't offer a PoC to confirm it could be exploited... why shouldn't you just treat it as a regular bug? Fix it now, and it'll be in the next release. What's that? People can see the changes? Well yes, they can see all the other changes too. Good luck to them finding an exploit, you didn't.
The same thing happens in Linux distros. A security bug gets reported. Sometimes, the upstream author is literally dead, not just intransigent. If you want change on your own timeline, make your own releases.
One area where freeswitch is probably used quite often (and without a support contract) is BigBlueButton installations (virtual classroom system) in schools and universities. I am more worried about them than about telcos.
Yeah, it's enabled with `load mod_xml_rpc`. Listening on 8080.
$ ./test3 # see above
<HTML><HEAD><TITLE>Error 408</TITLE></HEAD><BODY><H1>Error 408</H1><P>Problem getting the request header</P><p><HR><b><i><a href="http://xmlrpc-c.sourceforge.net">ABYSS Web Server for XML-RPC For C/C++</a></i></b> version 1.26.0<br></p></BODY></HTML>
From the article.
"This is not typically a problem, since most browsers don’t support URLs longer than 2048 characters, but the relevant RFCs support up to about 8 KB in most cases. (CloudFlare supports up to 32KB.)"
So obviously relying on browsers is not enough, but a nitpick: the article links to a Stack Overflow answer which actually notes that browsers support a lot more.
Browser     Address bar   document.location or anchor tag
----------  ------------  --------------------------------
Chrome      32779         >64k
Android     8192          >64k
Firefox     >300k         >300k
Safari      >64k          >64k
IE11        2047          5120
Edge 16     2047          10240
It's old but there's no reason to believe things have improved as there are zero incentives to. Also, software security vulnerabilities are only part of the problem - the other part is that telcos willingly outsource control and critical access to the lowest bidder: https://berthub.eu/articles/posts/5g-elephant-in-the-room/
2G GSM piggybacked its wired backend on the ISDN telecom standard (which is why your phone number is called an MSISDN).
Today's CAMEL (MAP and CAP) signalling is an evolution of the ISDN signalling, which traces its roots back to (amongst others) the SINAP signalling protocol and the SS7 network stack from even before that.
SS7 is early 1970s stuff. From a more innocent time.
Motorola's low-end 911 phone system, Emergency CallWorks (ECW), is Asterisk with proprietary modules running on Linux under Proxmox. Granted, Motorola is killing the product, but it's out there. The one I babysit is heavily firewalled, but I'd imagine not all of them are.
That is not the core, however; the core means the central pieces of a large telecom, the part that handles all the needed data to set up say, 10,000 or more calls per second.
For sure. The implicit trust that participants on the PSTN appear to give to each other, imparts a certain amount of undue influence to the constellation of dodgy systems interconnected to it.
No, but plenty of businesses that process your call data, whether it's for call recording, transcription, IVRs, speech analytics, CRM integration, call queuing, auto dialing, or SMS/chat features, are liable to be running stuff like FreeSWITCH, Asterisk, or similar somewhere in their stack.
Any business with a PBX that wants to do more than just basic call routing and PSTN connectivity is likely using third party tools. And a significant number of those tools are built on FreeSWITCH, Asterisk, or similar.
I've been beating the drum about this to everyone who will listen lately, but I'll beat it here too! Why don't we use seL4 for everything? People are talking about moving to a smart grid, having IoT devices everywhere, putting chips inside of peoples' brains (!!!), cars connect to the internet, etc.
Anyway, it's insane that we have a mathematically-proven secure kernel, we should use it! Surely there's a startup in this somewhere..
Almost all vulnerabilities are in apps and libraries which seL4 does little or nothing to solve. The only solution is secure coding across the entire stack which will reveal that much of the existing code is so low-quality that it just has to be thrown away and rewritten.
I imagine most of the people running Freeswitch have their own patches on top of the community releases anyway so we're compiling those security fixes in to our own builds. That's what we did anyway when I worked for a place using Asterisk, Freeswitch, and OpenSER/Kamailio whatever it is called this decade.
"potentially thousands of telecom stacks around the world that SignalWire has decided to keep vulnerable until the Summer, even after they published the patches on GitHub."
I would dare to whisper that the lack of security suits the NSA just fine. However you can add just about every technically competent nation state, organised crime, major corporations, and a collection of non-state actors. About the only group besides us normies who I think might really care about this are the payment rails folks, as this insecurity facilitates more fraud.
I have gone on about this before but most carriers have a psychological aversion to security, and most of their vendors adopt the same.
They see themselves as the wire, and thus completely incapable of being targeted by hostile third parties.
A non-exhaustive list of problems I have seen:
Credit cards stored in plaintext on the carrier's WordPress website.
ESXi and DRAC ports publicly available to the internet, not patched.
Inbound authentication attempts not dropped by core infrastructure - log files just filling up with brute-force attempts (often successful).
Software vendors not implementing carrier network standards and telling everyone they know better.
Tech support opening SOCKS proxy ports for technical support reasons and then leaving them open, where they get abused for Netflix traffic.
Field techs running around with core infrastructure passwords written on their paperwork.
Vulnerable hardware remaining unpatched and available to the internet for years - particularly Fortigate stuff.
Technicians building unencrypted PPTP VPNs on client infrastructure and leaving them open for years.
It doesn't surprise me that FreePBX/Asterisk etc. are full of issues. They only get yelled at when they push a change that knocks some eccentric SIP config offline; no one cares if they maintain vulnerable code as long as it works. Doubly so because there's a cottage industry in locating and using vulnerable SIP credentials for fraudulent phone calls.
1) To be slightly annoyingly contrarian, there is money to be made in secure telecom; Skype founders made a bundle, no?
2) This article conflates Freeswitch with major telecom carrier infrastructure. My impression is that 30+% of the problem with security is not technical but economic. Carriers outsource a ton of their operations, effectively outsourcing most efforts to care about security... which never helps security posture unless the outsourcer considers it their core value proposition, which they generally don't, instead pushing themselves as a cost/capitalization play.
In a past life I worked at AWS as a support engineer
I once got a ticket from T-Mobile (US) asking what "AWS's best practices were around security patching. How long should we wait?"
A week later they admitted to an enormous data breach
I'd say I switched phone carriers after that, but after working in the ISP market I already knew they were all absolute clown shows where all the money only went to C-levels and not infrastructure or security
Rust only fixes the memory safety issues. It doesn’t fix bad software design, the problem where we have to trust other companies to keep their security issues under control (eg. Cisco), and it can’t undo bad decisions that have become industry standards (eg. SS7)
Yes - the SS7 design started in the 1970s, when telecommunications was either the purview of a government agency or a state-granted monopoly (depending on where you were in the world), in which case it was perfectly rational to assume that your counterparties were trusted.
Allowing any random bozo to connect to the network's trusted center was a bad decision.
If the regulatory mandate to allow interconnection had also mandated the development and usage of a secure protocol for that interconnection, we'd be fine. But it mandated the opposite. Politicians got us into this mess, not programmers.
I would argue it’s the managers of the programmers who failed to foresee this as a future requirement, hence they didn’t tell the programmers to make it resilient to reasonably foreseeable changes to the operating environment.
It was not reasonably foreseeable. The Bell system had been a government-blessed monopoly since its inception. Pigs would fly before scammers were allowed to connect to raw SS7.
I don't have a lock on my mailbox. It is bad that the "low trust" internet overflows into my everyday life. I would rather that there was some separation of telephone calls, local community and banking etc from the lawless voids, than normalizing all these scams.
Telephone scam calls are mostly an internet problem.
I don’t get how your anecdote relates to SS7. SS7 is available country-wide (I’m assuming it doesn’t directly cross national borders) and the surface area of all of the cell towers and data centers they connect to is very large. Even larger if you consider all of the software that runs on the devices that are legitimately connected to that network. This isn’t even remotely comparable to some fictional high trust small rural town where everyone knows everyone.
I do have a lock on my mailbox, but it has to adhere to the USPS skeleton keys (which have been leaked and are exploited by thieves). Another example of bad design, or at least design that wasn’t able to withstand reasonably foreseeable changes to the operating environment.
You probably don't need an LLM to find vulnerabilities in software written like this. It took me a few minutes with GitHub in a web browser, but I'm sure you could make some headway with semgrep if you were bold enough.
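As a crude illustration of the kind of triage meant here (a naive regex sweep, nothing like what a real semgrep ruleset can do - the directory argument and function list are just examples), flagging classically dangerous C calls in a source tree:

    import re, sys, pathlib

    # flag classically dangerous C calls; real semgrep rules are AST-aware and far less noisy
    DANGEROUS = re.compile(r"\b(strcpy|strcat|sprintf|vsprintf|gets)\s*\(")

    for path in pathlib.Path(sys.argv[1]).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if DANGEROUS.search(line):
                print(f"{path}:{lineno}: {line.strip()}")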
A quick search found: https://www.euractiv.com/section/politics/short_news/uk-bann...
Correct. Both the US and Canada also did similar investigations and came to similar conclusions.
US does the same thing with Cisco servers (source: a Glenn Greenwald tweet).
That kind of analysis needs a control.
But I guess it’s like you said: a political experiment.
The purpose of the control would be to establish whether the competition is actually any better.
But if they're no worse than the alternative, there's no point in spending the extra money.
But they are worse. Massively so.
Also, even if all providers provide equally crappy versions, it's still slightly more secure to prefer a vendor in your own or an allied nation. At least your interests are mildly aligned.
But really, they are massively worse.
That's a different kind of experiment, and I've just got to say that there is no "one size fits all" method of experimentation. The reason there doesn't need to be a control here is because comparators have ZERO effect on the questions being asked.
These are completely different questions with completely different methods of evaluation. And honestly, there is no control in the latter. To have a control you'd have to compare against normal operating conditions and at that point instead you really should just do a holistic analysis and provide a ranking. Which is still evaluating each vendor independently. _You don't want to compare_ until the very end. Any prior comparison is only going to affect your experiments.
tl;dr: most experiments don't actually need comparisons to provide adequate hypothesis answering.
That control already exists because similar levels of audits have already happened on the competition. I'm not saying the competition is a shining example of quality, it definitely isn't, but it meets a bar of some set of basic security compliance standards.
I've been personally involved in evaluating the security of a certain vendor starting with the letter H. Let us just say they are "less than honest". I had pcaps of their bullshit trying to reach out to random C2 shit on the internet, which garnered a response of "there must be a mistake, that is not our software".
Let China sell their telecom bullshit to all the poor people of the world - they will learn hard lessons.
Does it send more data to more endpoints than US-made Windows OS (I wiresharked it in a VM so I know)?
I'm not comparing it to an OS. I'm comparing it to other competitors in the particular solution space. To answer your question: no one else's equipment behaved in that manner.
> To answer your question
Maybe I'm being pedantic, but that doesn't answer their question.
Was this for phones or home routers?
[dead]
Just curious is there write ups on certain devices? Would love to buy one from Ali express and look into this.
Is this a good starting point?
https://vulners.com/search/types/huawei
How does a regular hacker get their hands on this kind of equipment to do research?
eBay, Alibaba, various grey-market sellers. There's no shortage of availability if you know what to look for.
> IMHO the hardware vendors from a certain East Asian state have such poorly written software stacks, that they could almost be classified as APTs - security is non-existent.
Thank god we have the hardware and software vendors from a certain North American state, who take security very seriously. Oh, wait ... /s
At least those can currently pass industry level audits.
Given that Cisco has RCEs and hardcoded credential CVEs at least once every half year or so, the question does arise if our current level of audits is even remotely sufficient. And it's not Cisco alone - any major vendor of VPN or firewall or general network gear suffers from the same problem.
They are not. Cisco and literally every other major commercial IT vendor has software that can only be considered a pile of trash that is grossly and criminally inadequate against commonplace threats and attacks.
But imagine how bad your software has to be to not even be good enough to qualify as a pile of trash. Do not let terrible be the friend of bad.
One thing I absolutely don't understand about telecom security is how, in 2025, we're still using pre-shared keys in our mobile phone standards.
RSA and Diffie Hellman[1] have existed for decades, so have CA systems, yet SIM cards are still provisioned with a pre-shared key that only the card and the operator knows, and all further authentication and encryption is based on that key[2]. If the operator is ever hacked and the keys are stolen, there's nothing you can do.
To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
To me, this absolutely feels like a NOBUS vulnerability, if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
[1] I'm aware that those algorithms are not considered best practices any more and that elliptic curves would be a better idea, but better RSA than what we have now.
[2] https://nickvsnetworking.com/hss-usim-authentication-in-lte-...
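To make the single-root-of-trust point concrete, here is a minimal sketch of the AKA-style challenge/response shape that SIM authentication follows. This is not the real MILENAGE/TUAK f-functions - HMAC-SHA256 stands in as a hypothetical keyed function - and OpenSSL is just an assumed toolchain; the structure, not the primitives, is the point:

    /* Minimal sketch of the AKA-style shape: the operator and the USIM hold the
     * same long-term key K. HMAC-SHA256 is a stand-in here; real networks use
     * MILENAGE or TUAK with operator-specific constants. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/rand.h>

    static void keyed(const unsigned char *k, const unsigned char *rand16,
                      const char *label, unsigned char out[32]) {
        unsigned char msg[64];
        unsigned int len = 0;
        size_t lab = strlen(label);
        memcpy(msg, rand16, 16);
        memcpy(msg + 16, label, lab);
        HMAC(EVP_sha256(), k, 16, msg, 16 + lab, out, &len);
    }

    int main(void) {
        unsigned char K[16] = {0};          /* provisioned into both the SIM and the operator */
        unsigned char rand16[16];           /* network-chosen challenge */
        unsigned char res[32], ck[32];

        RAND_bytes(rand16, sizeof rand16);  /* network side picks RAND */

        keyed(K, rand16, "RES", res);       /* both sides can compute the response... */
        keyed(K, rand16, "CK",  ck);        /* ...and the session (ciphering) key */

        /* The point: everything downstream is derived from K. Anyone who holds
         * a copy of K (operator, personalization site, or a thief) can derive
         * the same session keys for any observed RAND. */
        printf("RES[0]=%02x CK[0]=%02x\n", res[0], ck[0]);
        return 0;
    }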
Gemalto was hacked 15 years ago:
> "AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden."
https://theintercept.com/2015/02/19/great-sim-heist/
I have talked to telephone engineers and they said they could read all passing SMSs verbatim when they hooked up to a cell tower to debug stuff.
Dunno if that is still the case though. But cell phones did not use to be a secure communication channel.
You would probably want to communicate with encrypted data traffic device to device.
From what I remember from early 2000’s only the air interface was encrypted. Since anyway they have to provide lawful intercept capability there was not much benefit in providing end to end encryption. It’s not like it was a top of mind feature for consumers.
> It’s not like it was a top of mind feature for consumers.
BlackBerry got some market share for promoting their encryption.
Of course the encryption was complete junk, possibly worse than junk because of the false sense of security, unless you were an enterprise customer.
https://www.theregister.com/2016/04/15/canada_blackberry_wat...
Blackberry's (probably) legally allowed to provide text message encryption, but telcos aren't. "Lawful intercept" (which should more accurately be called "gunpoint eavesdropping") is a legal requirement for all telcos, and the larger the telco, the more optimized and automated the process is required to be. They have to be able to read customer SMSes and tap phone calls. If the SMS happens to be gibberish, that's not their problem, but they can't make it gibberish.
In the late 90s/early 2000's, I would hear voice telephone conversations in central offices quite frequently. (Nobody was spying on purpose, or even paying much attention to what was being said. It was incidental to troubleshooting some problem report.)
This is still the case when troubleshooting POTS lines on analog PBX systems.
All you need is the probe side of a tone generator and you can listen to analog phone conversations in progress with no additional configuration or hardware.
That's done sometimes in central offices, although for analog lines a lineman's handset was the more common tool.
Digital test systems (I don't know what they use now; back then the venerable T-BERD 224 was the standard tool) can decode a single DS0 out of a larger multiplexed circuit and play the audio back and usually allow you to insert audio into a channel. That's normally what was being used to isolate a fault at one or more of the mux/demux/translation points.
I'll bet you've got some other great war stories, too.
Most of my telecom experiences were pretty boring. It largely consisted of handling digital circuits for modem banks, then later setting up a very small CLEC and building small PBX systems out of open source software in the early 2000s, which at the time worked about as well as you might imagine[0]. The outside plant people for the local ILEC had the best war stories:
* Someone tried to carjack a friend while he was suspended in the air in the bucket of a bucket truck, making a repair in a splice case[1].
* Another friend was making a repair in a bad part of town, and while he was doing some work in a junction box (a larger, ground-based version of a splice case), a drug addict hobbled out of a nearby house and asked him if he was with the phone company. When he replied in the affirmative, the drug addict asked him to call 911, as one of his compatriots was ODing.
... etc...
I did get to help another service provider recover from a tornado by physically removing mud and debris from their equipment over the course of a few days and powering it back on. It almost all worked, with a few parts swapped out. I wrote about that one[2].
*Edit* I forgot I have one good CLEC war story. I wrote a test system that ended up calling 911 several times and playing a 1 kilohertz test tone at the 911 operator until they hung up. The test system was meant to troubleshoot an intermittent call quality issue that we were having difficulty isolating. It consisted of a machine with a SIP trunk on one side and an analog telephone on the other. It would call itself repeatedly, play the 1k test tone to itself, look for any audio disturbances, and record a lot of detail about which trunks were in use, etc., when that occurred. That all worked fine. The problem was the telephone number for the SIP trunk, which I remember to this day (20 years later) - 5911988. Every once in a while, when calling the SIP trunk from the analog line (this thing made thousands of calls), the leading 5 wouldn’t get interpreted correctly, and the switch would just process the subsequent digits… 9, 1, 1 - as soon as that second 1 was processed, it sent the call to the local PSAP. After a few days a police officer showed up and asked us to please stop whatever it was we were doing.
0 - "not at all"
1 - in the US, anyway, these are the black cylindrical objects you see suspended from cables strung along utility poles
2 - https://marcusb.org/posts/2023/11/a-real-life-disaster-recov...
Not only can they read them, they probably record them too, because SMS messages don't use much space.
Some of these algorithms have to run on the SIM card, and smart cards (at least in the past) don't support RSA or (non-elliptic-curve) DH without a coprocessor that makes them more expensive.
Also, symmetric algorithms are quantum safe :)
But yes, I also wish that in 2025 we'd at least support ECC, which most smart cards should support natively at this point.
> To make things even worse, those keys have to be sent to the operator by the SIM card manufacturer (often a company based in a different country and hence subject to demands of foreign governments), so there are certainly opportunities to hack these companies and/or steal the keys in transit.
If you can't trust your SIM card vendor, you're pretty much out of luck. The attack vector for an asymmetric scheme would look a bit different, but if you can't trust the software running on them, how would you know if they were exfiltrating plaintexts through their choice of randomness for all nondeterministic algorithms?
If you have the ability to distribute keys directly, asymmetric cryptography adds complexity without much payoff. Certainly the idea that introducing RSA to a symmetrical system makes it more sound isn't well supported; the opposite is true.
The "NOBUS vulnerability" thing is especially silly, since the root of trust of all these systems are telecom providers. You don't have to ask if your American telecom provider is "in cahoots" with the US intelligence community; they are.
You appear to be neglecting the need for symmetric stream ciphers to achieve realtime communications (needed for performance reasons). No matter what you do, you are going to have a symmetric key in there somewhere for adversaries to extract. Once the adversary owns the telco, it is over (i.e., calls can be decrypted), no matter how strong the cryptography is. Your strongest cryptography cannot withstand a key leak.
Do you know how TLS works? The asymmetric keys are used to negotiate temporary symmetric keys, which are used for the actual data. That's exactly what the mentioned Diffie-Hellman algorithm does. Also check out "perfect forward secrecy".
Of course I know how TLS works, as well as PFS. I recommend Kaufman on the subject. The general scheme you refer to is known as hybrid cryptography, and the key material that is derived is used to generate symmetric keys for the TLS session (several keys, in fact, separately for confidentiality and integrity, and for duplex communications). You missed my point completely, though. Unlike TLS sessions, which rely on packets, calls are multiplexed with TDMA or CDMA, for example. Unlike TCP, these channels have realtime requirements that necessitate stream ciphers be employed. I could ask you if you know how telecom works, but that would be childish and demeaning. As ephemeral as you wish to make it, the telco must know the secret key, for imagine if the call is being relayed to Timbuktu and must be passed in plaintext.
> these channels have realtime requirements that necessitate stream ciphers be employed
Even if that were relevant (you can easily convert a block cipher to a stream cipher): It's absolutely possible to do key derivation for a symmetric stream cipher asymmetrically.
> the telco must know the secret key
No, the telco must not know the secret key if they're serious about confidentiality.
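To illustrate the block-to-stream point with something concrete, here's a minimal sketch using AES-128 in CTR mode via OpenSSL (an assumed toolchain; key, nonce, and frame contents are placeholders). A frame of audio goes in and comes out the same size, with no padding and no per-block buffering:

    /* Minimal sketch: AES-128 run in CTR mode behaves as a stream cipher --
     * each voice frame goes in and comes out the same size. OpenSSL's EVP
     * interface is assumed here. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void) {
        unsigned char key[16] = {0};   /* would come from the key exchange */
        unsigned char iv[16]  = {0};   /* nonce || counter */
        unsigned char frame[160];      /* e.g. one 20 ms narrowband audio frame */
        unsigned char out[160];
        int outl = 0;

        memset(frame, 0xAB, sizeof frame);

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);

        /* Encrypting a frame is a single XOR against the AES-generated keystream;
         * the latency cost is a handful of AES block computations per frame. */
        EVP_EncryptUpdate(ctx, out, &outl, frame, sizeof frame);
        printf("frame of %d bytes encrypted to %d bytes\n", (int)sizeof frame, outl);

        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }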
Right, let's redesign telecom infrastructure...
Isn't every new mobile standard effectively a complete redesign of the core network anyway?
Sure, it'll take decades to be fully rolled out, but that's true for every large-scale change. The real problem is that it's not in the interest of stakeholders to have end-to-end security.
Judging by this and your other comment, you seem to have made up your mind that the powers that be are not interested in end to end security. You seem to be ignoring (or disregarding without explanation) similar engineering feedback independently provided to you by different people. Good luck to you, sir!
By stakeholders I don't mean the telecom industry, but the governments regulating it. Lawful interception is non-negotiable, and (working) end-to-end encryption would break that, so I predict that we'll never see it on the POTS, VoIP or circuit switched. (And even OTT VoIP is under constant political attack.)
> You seem to be ignoring (or disregarding without explanation) similar engineering feedback
You mean the other "Bellhead" comments explaining why it's technically impossible to do something on the POTS that's been solved in OTT VoIP for years, like real-time end-to-end encryption using block ciphers etc.?
Yeah, I do discount confident statements declaring something technically impossible when I've been happily using such a system for the better part of a decade.
You can "easily" convert a block cipher to a stream cipher on paper (e.g., using OFB or output feedback mode), but you will not get the performance. You clearly have no working knowledge here.
I don't doubt that it's hard in existing systems, which might not have AES hardware instructions, spare processing power available etc., but my point is more that, if it were made a design goal, it would be absolutely feasible.
If we can encrypt basically every HTTP request on the Internet, surely we can encrypt a few phone calls too?
But the main problem is not technical, but that stakeholders don't want to anyway (lawful interception etc.), so presumably nothing will change.
> If we can encrypt basically every HTTP request on the Internet, surely we can encrypt a few phone calls too?
Again, you seem to not understand the performance requirements of real-time audio. The amount of data is tiny, but the latency (and particularly jitter) requirements are on a completely different level than HTTP.
Given that Signal and WhatsApp manage it just fine even on the slowest Android smartphones made in the past decade (without hardware AES acceleration), I’d say you are vastly overestimating the computational load of symmetric encryption.
The added latency is probably undetectable, and unless the CPU is at capacity, there’s no extra jitter either.
Conversely, you might be vastly overestimating channel capacity. If ALL subscriber calls were on WhatsApp or Signal, the network would grind to a halt.
Besides making no sense (modern networks are already largely VoIP based, so what’s the difference from a capacity point of view): What does that have to do with anything discussed in this thread, i.e. the feasibility of encrypting VoIP calls?
I'm really starting to wonder: Did I unintentionally send some kind of bat signal through time, channeling Bellhead objections to the feasibility of VoIP that have been thoroughly and empirically disproven years ago when the POTS largely switched to NGN and IETF standards, and people around the world have moved on entirely to Internet-based OTT VoIP services?
I was not commenting on VoIP, it works nicely and has for a long time, in the network core too. Mobile carriers do not use VoIP with the MS, to my knowledge. There's "Wi-Fi Calling", but that is the closest you're gonna get to packetized data streams reaching your phone (it sees traction where other reception is bad and the carrier has to rely on the Internet). Your use of "Bellhead" as a derogatory term is noted, and is more reflective on you than anything else. Feel free to have the last word, though.
> Mobile carriers do not use VoIP with the MS
No, they do, exclusively. LTE and beyond don’t even support circuit switched calls anymore.
Bellhead wasn’t intended in a derogatory way, just as a reference to the “Netheads vs. Bellheads” schools of thinking about networks.
I do have great respect for historical phone systems and the clever engineers making them work. In terms of absolute reliability, I think VoIP was indeed a step back (although I think that's more due to modern engineering and QA practices than to inherent limitations).
Exclusively? You make it sound like VoLTE is mandatory. That is not the case, to my knowledge. On a 4G network, for example, one does not always have VoLTE available, and yet one is always able to place voice calls. Since your conviction is palpable, if you could please provide a reference then that would help further the discussion. If not, then no worries, will find the information on my own.
You're telling me it's absolutely impossible to run a key exchange over these channels?
Not without redesign. I am telling you that whatever key exchange you run, it will result in key material that is accessible by the telco and therefore by your adversary (e.g., PRC). This is true even if you deployed authenticated Diffie-Hellman between endpoints. You might be able to do secure VoIP on top of that, but you cannot use existing telco infrastructure for your calls without expecting the tower to be able to decrypt the call. The ability of the telco to decrypt the call is the very basis of CALEA and LI, or lawful interception modules, and the reason why Salt Typhoon works.
The point is to limit the damage of a key leak, not eliminate it. Limiting the scope of a compromise to a single connection rather than all communications for the past and future is an improvement.
And yeah, of course we're talking about a redesign. If we were content with the status quo why would we be here?
Phase 1 and Phase 2.
Yes, but that "somewhere" could very well be only the two phones involved in a call, with key establishment happening via Diffie-Hellman. Doesn't protect against an active attack, but there's no key to leak inside the network.
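A minimal sketch of that idea, assuming OpenSSL and picking X25519 purely as an example group: each handset makes an ephemeral keypair, only public keys cross the network, and both ends derive the same call key. (As noted, this alone doesn't stop an active man-in-the-middle; the exchange would still need to be authenticated.)

    /* Minimal sketch: ephemeral Diffie-Hellman (X25519) between two handsets.
     * Only public keys cross the network; the shared secret exists only at the
     * endpoints. OpenSSL 1.1+ EVP interface is assumed. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    static EVP_PKEY *gen_x25519(void) {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_X25519, NULL);
        EVP_PKEY *key = NULL;
        EVP_PKEY_keygen_init(ctx);
        EVP_PKEY_keygen(ctx, &key);
        EVP_PKEY_CTX_free(ctx);
        return key;
    }

    static void derive(EVP_PKEY *mine, EVP_PKEY *peer, unsigned char out[32]) {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(mine, NULL);
        size_t len = 32;                  /* X25519 shared secret is 32 bytes */
        EVP_PKEY_derive_init(ctx);
        EVP_PKEY_derive_set_peer(ctx, peer);
        EVP_PKEY_derive(ctx, out, &len);
        EVP_PKEY_CTX_free(ctx);
    }

    int main(void) {
        EVP_PKEY *alice = gen_x25519();   /* caller's ephemeral keypair */
        EVP_PKEY *bob   = gen_x25519();   /* callee's ephemeral keypair */
        unsigned char a[32], b[32];

        derive(alice, bob, a);            /* each side combines its private key... */
        derive(bob, alice, b);            /* ...with the other's public key */

        printf("shared secrets match: %s\n", memcmp(a, b, 32) == 0 ? "yes" : "no");

        EVP_PKEY_free(alice);
        EVP_PKEY_free(bob);
        return 0;
    }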
Right, let's redesign telecom infrastructure...
After seeing STIR/SHAKEN's implementation details (hey what if we used JWT, and then maximized the metadata leakage of who you're calling), I really do not want to trust telecoms to roll their own crypto.
https://securitycryptographywhatever.com/2024/04/30/stir-sha...
At least they're now only botching protocols instead of self-rolling low-level primitives like block and stream ciphers...
> To me, this absolutely feels like a NOBUS vulnerability, if the SIM manufacturers and/or core network equipment vendors are in cahoots with the NSA and let the NSA take those keys, they can potentially listen in on all mobile phone traffic in the world.
This feels like the obligatory XKCD comic[1], when in reality there isn't any secretive key extraction or cracking... things are just sent unencrypted from deeper in the network to the three-letter agencies. Telcos are well known to have interconnect rooms with agencies.
[1] https://xkcd.com/538/
> Telcos are well known to have interconnect rooms with agencies.
Maybe these connections are a requirement for their permits in the first place. Who knows?
https://en.wikipedia.org/wiki/Qwest#Refusal_of_NSA_surveilla...
Not a requirement, but if for some reason you don't do the Right Thing that the NSA wants, oh dear your CEO goes to jail, he was a bad boy, look at all that insider trading. You'll do the Right Thing next time we ask, capiche?
https://en.wikipedia.org/wiki/Room_641A
https://en.wikipedia.org/wiki/Dagger_Complex in Germany.
Right next to the former https://de.wikipedia.org/wiki/Posttechnisches_Zentralamt
and https://de.wikipedia.org/wiki/Fernmeldetechnisches_Zentralam...
(similar to Bell Central Office/HQ)
hosting Deutsche Telekom's early NOC and CIX.
There are also endless ramblings of some German blogger about how he was sabotaged at the University of Karlsruhe, regarding very early development of encrypted digital telephony and data transfer in the '80s/'90s, by very incompetent and corrupt professors connected to this.
Also related: https://en.wikipedia.org/wiki/Crypto_AG
https://en.wikipedia.org/wiki/Operation_Rubicon
https://en.wikipedia.org/wiki/Maximator_(intelligence_allian...
We're all friends, listening in on the party line :>
Are you suggesting end-to-end encryption? Telecom providers have to implement "lawful intercept" interfaces to comply with the law in many jurisdictions.
I think they're just suggesting improvements on device-to-network encryption. Requiring the sim card secret to live on the sim card and the network means it needs to be transmitted from manufacturing to the network, which increases exposure.
If it were a public/private key pair, and you could generate it on the sim card during manufacturing, the private key would never need to be anywhere but the sim card. Maybe that's infeasible because of seeding, but even if the private key was generated on the manufacturing/programming equipment and stored on the sim card, it wouldn't need to be stored or transmitted and could only be intercepted while the equipment was actively compromised.
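A minimal sketch of the "generate on the card, export only the public half" idea, assuming OpenSSL on the personalization equipment and P-256 purely as an example curve (a real USIM would do this inside its secure element):

    /* Minimal sketch: the private key is created locally and never serialized
     * anywhere; only the public key is written out for the operator to register.
     * OpenSSL is assumed; P-256 is just an example curve. */
    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/ec.h>
    #include <openssl/obj_mac.h>
    #include <openssl/pem.h>

    int main(void) {
        EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_EC, NULL);
        EVP_PKEY *key = NULL;

        EVP_PKEY_keygen_init(ctx);
        EVP_PKEY_CTX_set_ec_paramgen_curve_nid(ctx, NID_X9_62_prime256v1);
        EVP_PKEY_keygen(ctx, &key);

        /* Only the public key leaves the device; there is no shared secret to
         * ship to the operator or to steal in transit. */
        PEM_write_PUBKEY(stdout, key);

        EVP_PKEY_free(key);
        EVP_PKEY_CTX_free(ctx);
        return 0;
    }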
This really is the least concern in the entire mess that is phone network security. (Credit and debit card issuers have the same key distribution and derivation problem, but it's ~fine, and there are robust standard solutions, such as deriving per-card keys at the personalization site using tamper-proof HSMs.)
Even if SIM cards were to feature an asymmetric private key: What would you do with it? How would you look it up, and what would you use it for? There is simply no provision for end-to-end encryption in the phone network at the moment.
If there were, it would be a different story, of course, but I doubt that will ever happen.
That's fine. Let them have lawful intercept into my encrypted communications. Let them eat static
As part of lawful intercept, they can't encrypt the traffic and then send the NSA the encrypted traffic. They have to send the unencrypted traffic. Or they go to jail.
you've missed the point. if it is e2ee, then there's nothing but noise going down that lawful intercept. the ISP upheld their obligation, yet nosy bitches get nothing.
The ISP itself can't do E2EE because it's incompatible with lawful intercept.
okay. so let me break it down further. you and i exchange messages via e2ee app. i text you, the app encrypts it, then sends it down the wire. the TLA lawful intercepts that data, but it is just random noise because it is encrypted. your app finally receives the e2ee data, decrypts it, shows you the message.
the data in transit is encrypted beyond anything the ISP has control over, so if the ISP provides lawful intercept they have fulfilled their obligation to the TLA because they let them see the data. it's not the ISP's fault that you and I encrypted the data. this isn't TLS encryption.
if that's not clear enough, then someone else will have to step in as I have taken as far as I can
If it's all encrypted you wouldn't be able to call land lines.
yeah? and? so? who does that? if you're concerned about being intercepted and are still using land lines, then you're really not concerned. we learned that in the 80s. if you're using SMS, you're also not really concerned. friends don't let friends use unencrypted.
I exclusively call landlines from my phone, at least using "actual" phone calls; businesses, to be precise.
For all person-to-person calls, my family and friends have long switched to FaceTime and WhatsApp, which are both encrypted. Why would I pay per minute for a less secure and lower fidelity (HD voice usually does not work internationally) channel?
That said, I really would prefer if the POTS were better secured, given that SSNs and payment card numbers are transmitted over it all the time.
I worked for a major telco in technical support/customer service.
I saw numerous security issues, and when I brought them up, with solutions to improve the service for customers, I was informed the the company would lose money.
Scammers are big customers for telcos, and when they get caught, and banned, they come back again and pay a new connection fee and start the cycle again. Scammers also enable feature upselling, another way to profit from not solving the problem.
"I don't understand it, must be <insert conspiracy>."
You realize this exact thing was in the Snowden docs a decade ago? This exact worry, sim keys being hacked by the NSA, was in the leaks.
You seem to have forgotten, anything inconvenient to the government is a conspiracy theory.
follow the money. perverse incentives at play.
To be honest, the conclusion of the blog post that Freeswitch are not budging from their community release schedule does not surprise me one iota.
Freeswitch used to have a strong community spirit.
Things all changed when they took a more aggressive commercial turn, a couple of years ago IIRC.
Since that point you now have to jump through the "register" hoop to gain access to stuff that should be open (I can't remember what it is, IIRC something like the APT repos being hidden behind a "register" wall, something like that).
I don't want to "register" with a commercial company to gain access to the foss community content. Because we all know what happens in the tech world if you give your details to a commercial company, the salesdroids start pestering you for an upsell/cross-sell, you get put in mailing lists you never asked to be put on, etc.
In Signalwire’s defence, reading through the old mailing list, I got the feeling they drove the development of Freeswitch for years without being properly compensated by downstream projects. Sadly I’ve also seen other parts of the VoIP community recalibrate their generosity when it comes to open source, and I honestly can’t blame them.
The team behind Matrix.org talked about a similar problem in one of their FOSDEM ’25 talks: commercial vendors freeloading on development.
It's MPL licensed. Perhaps they should have chosen a different license if they want to be compensated.
I think it's fair to assume that between foreign threat actors, the Five Eyes/other Western pacts, and the demand to make the line go up, there's no real anonymity online. If they want you, they've got the means to get you.
In reality that's really no different than the pre-internet age. If you don't want your stuff intercepted, you need to encrypt it by means that aren't trivial to access electronically for a major security apparatus. Physical notes, word-of-mouth, hand signals, etc.
Also, you need to be ready for the consequences of what you say and do online should a state actor decide to allocate the resources to actually act upon the data they have.
From the article I am not totally convinced that "Telecom security sucks today", given they just randomly picked Freeswitch to find a buffer overflow. "Telecom stacks" might or might not be insecure, but what's done here is very weak evidence. The Salt Typhoon attacks allegedly exploited a Cisco vulnerability, although analysts suggest the attackers had been using proper credentials (https://cyberscoop.com/cisco-talos-salt-typhoon-initial-acce...). So nothing to do with Freeswitch or anything.
Cisco Unified Call Manager almost certainly has vulnerabilities, as does Metaswitch which has shambled along in network cores after Microsoft publicly murdered it, Oracle SBC is often wonky just doing the basics, whatever shambling mess Teams is shipping this week for their TRouter implementation definitely has Denial of Service bugs that I can't properly isolate.
Let's not even talk about the mess of MF tandems, or almost every carrier barebacking the web by slinging raw unencrypted UDP SIP traffic over the internet...
It is possible to build secure systems in this space, but instead we have almost every major telecom carrier running proprietary unmodifiable platforms from long dead companies or projects (Nortel, Metaswitch,etc) and piles of technical debt that are generally worse than the horribly dated and unpatched equipment that comprises their networks.
I find it absolutely insane that the industry standard for SIP trunks is unencrypted UDP, usually using IP-based authentication.
When I asked a popular VoIP carrier about this a while back, they argued that unencrypted connections were fine because the PSTN doesn't offer any encryption and they didn't want to give their customers a false sense of security. While technically true, this doesn't mean we shouldn't at least try to implement basic security where we can - especially for traffic sent over the public Internet.
PSTN starts at the home router these days, I don't think I can get an actual analog line in my house.
My DOCSIS service provider turned off encryption. That's likely due to the certificate expiring on a popular modem brand. Key management is hard, certificate management is hard. Especially when they don't care about security. The encryption was only DES to begin with instead of AES which is supported in DOCSIS but few service providers bother. Anyone who has the tools to sniff DOCSIS can eavesdrop on my provider's nodes and hear the incoming leg of phone calls.
Great example. Anyone who cavalierly states "let's do PKI!" has not done PKI.
Painting a dire picture here!
It'd be lovely to see some nations of the world pour some serious money into the various Linux Foundation (or other open source) telco & cellular projects.
Pouring money is not how you get good quality software. You need a company driving product quality. Most Linux foundation projects have companies heavily invested in productionizing the projects and that leads to them contributing to them to ensure high quality code. Code without a driving product tends to wander aimlessly.
That's, like, your opinion man.
Maybe the money should have more strings attached, be attached to grant proposals, whatever.
I don't see that that is an important or clarifying distinction. Governments should be directly helping, with money, somehow. Collectivizing the investment yields better returns and far better outcomes, and open source is the only way you're going to avoid risking your investment in a single company that may fail over time. Having your nation take its infrastructure seriously should be obvious, and this is how. And I disagree that good things only happen at companies. The post I was responding to stands as incredibly broad evidence that that often doesn't happen.
Governments are often the ones the encryption is protecting us from - they're not going to fund better encryption.
Linux foundation is the thing financing backdoors. do not confuse it with Linux. the only money from the foundation that goes to actual Linux are a couple build servers. and one event sponsorship. absolutely nothing else.
Call Manager etc have zero to do with SP networks.
>Cisco Unified Call Manager
Is that not a kind of business/enterprise thing?
"Telecom" to me is like a network core equipment and radio towers - https://www.cisco.com/c/en/us/products/wireless/pgw-packet-d...
I worked with telecom code. It's code that parses complicated network protocols with several generations of legacy, often written in secrecy (security by obscurity), and often in C/C++.
There's just no way it can be insecure. Right.
That's also how the majority of network appliances are handled outside of Telecom.
Yep. And the network appliance world also tried to make that a "feature", by making things like "management VLANs" and pretending that you don't need to be secure because of it.
I don't doubt that this cruft is insecure. It's just a bit of a stretch to get to that conclusion from finding a potential buffer overflow in Freeswitch. Maybe it's not a stretch but just a conclusion by analogy but then you might just say "all software is insecure".
I've had a few conversations with [security nerds more familiar with telecom] since SignalWire broke embargo.
The "everything sucks and there's no motive to fix it" was a synopsis because, frankly, those conversations get really hard to follow if you don't know the jargon. And I didn't feel like trying to explain it poorly (since I don't understand the space that well, myself), so I left it at what I wrote.
(I didn't expect Hacker News to notice my blog at all.)
As a security nerd working within telecom: agreed. Nobody really cares about security issues. And when people already struggle to care about the issues, it gets even worse when fixing some of them (such as SS7 vulns) requires coordination with telcos around the world. cape[1] at least seems like it's a breath of fresh air within the space.
[1] - cape.co
I'll have to try to find a video of the HOPE presentation where I first heard about SS7 and how riddled it was with known vulnerabilities, my jaw hit the floor.
Can confirm. It’s not even nonchalance, but outright hostility to security, because that sounds like work and change. And if there’s anyone who hates change, it’s telecom. They still resent having to learn VoIP, and it could have kids in college at this point.
cape.co marketing sounds suspiciously like the cia front in Switzerland in the late 90s.
"hey you who needs privacy, here's something that somehow costs even less than the ones selling your data"
> (I didn't expect Hacker News to notice my blog at all.)
Your blog actually gets posted somewhat regularly [0]. I actually remembered it, because it’s one of the rare cases where I like the "cute" illustrations.
[0]: https://hn.algolia.com/?q=https%3A%2F%2Fsoatok.blog%2F
This blog article is a combination of "I did a thing I do for a living" plus "recipient of my report does not share my (and the rest of the Security Industry) values", and concludes that Telecom security sucks.
It's very nice that the author spent their free time looking at code, found a bug and reported it -- I don't want to discourage that at all, that's great. But the fact that one maintainer of one piece of software didn't bow and scrape to the author and follow Security Industry Best Practises, is not a strong basis for opining that "Telecom security sucks today" (even if it does)
If someone came to you with a bug in your code, and they didn't claim it was being actively exploited, and they didn't offer a PoC to confirm it could be exploited... why shouldn't you just treat it as a regular bug? Fix it now, and it'll be in the next release. What's that? People can see the changes? Well yes, they can see all the other changes too. Good luck to them finding an exploit, you didn't.
The same thing happens in Linux distros. A security bug gets reported. Sometimes, the upstream author is literally dead, not just intransigent. If you want change on your own timeline, make your own releases.
When the code is sprintf(stackbuf, "%s", attacker_supplied_input) in 2025, I expect some serious bowing and scraping.
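For anyone who doesn't write C, the gap between that line and a safe version is a single bounded call. A minimal sketch, with hypothetical names rather than FreeSWITCH's actual code:

    /* The pattern being criticized, and the boring fix. Names are illustrative. */
    #include <stdio.h>
    #include <string.h>

    void handle_request(const char *attacker_supplied_input) {
        char stackbuf[256];

        /* Vulnerable: copies however many bytes the caller sent, straight over
         * the 256-byte stack buffer and whatever lies beyond it. */
        /* sprintf(stackbuf, "%s", attacker_supplied_input); */

        /* Bounded: never writes more than sizeof(stackbuf) bytes, including the
         * terminating NUL; longer input is truncated instead of smashing the stack. */
        snprintf(stackbuf, sizeof stackbuf, "%s", attacker_supplied_input);

        printf("%s\n", stackbuf);
    }

    int main(void) {
        char big[4096];
        memset(big, 'A', sizeof big - 1);
        big[sizeof big - 1] = '\0';
        handle_request(big);   /* 4 KB of input, safely truncated to 255 chars + NUL */
        return 0;
    }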
In fairness, with that level of vulnerability in the code, fixing it is like using paper towels to mop up the ocean.
Yeah if anyone thinks people don't just run searches for `sprintf` they're pretty naive.
One area where freeswitch is probably used quite often (and without a support contract) is BigBlueButton installations (a virtual classroom system) in schools and universities. I am more worried about them than about telcos.
I wonder how many people are even using the XML RPC module. It doesn't get loaded by default.
Edit: 468 according to Shodan. I'm wondering if senddirectorydocument gets used at all by the XML RPC module.
Following up on this, I was unable to get it to do anything.
Any ideas on triggering it? I imagine if we get a PoC that at least causes a segfault or whatever, they will be more likely to do a security release. I may be wrong, but I think you need to enable the module for API access.
Yeah, it's enabled with `load mod_xml_rpc`. Listening on 8080.
hmmm. From the article: "This is not typically a problem, since most browsers don’t support URLs longer than 2048 characters, but the relevant RFCs support up to about 8 KB in most cases. (CloudFlare supports up to 32KB.)"
So obviously relying on browsers is not enough, but a nitpick. The article links to a stackoverflow which actually notes browsers support a lot more.
Ah, that's fair.
I highly recommend checking out P1 Security's presentations around mobile telco security: https://www.slideshare.net/slideshow/day1-hacking-telcoequip....
It's old but there's no reason to believe things have improved as there are zero incentives to. Also, software security vulnerabilities are only part of the problem - the other part is that telcos willingly outsource control and critical access to the lowest bidder: https://berthub.eu/articles/posts/5g-elephant-in-the-room/
The really good hacks happen with CAMEL MAP injection. Controls all sorts of goodness like SMS, USSD, and the crown jewel: location services.
Many a "bulk SMS" provider in places like the richer Caribbean islands and Indonesia does a lot more than send spam.
To add, MAP is 2G and 3G.
So, it is old. 2G was designed in the 90s.
I don't really know what people expect? I'm just happy it works at all, lol.
2G GSM piggybacked its wired backend on the ISDN telecom standard (which is why your phone number is called an MSISDN).
Today's CAMEL (MAP and CAP) signalling is an evolution of the ISDN signalling, which traces its roots back into (amongst others) the SINAP signalling protocol and the SS7 network stack from even before that.
SS7 is early 1970s stuff. From a more innocent time.
No major carrier is running FreeSwitch or Asterisk at the core.
Motorola's low-end 911 phone system, Emergency CallWorks (ECW), is Asterisk with proprietary modules running on Linux under Proxmox. Granted, Motorola is killing the product, but it's out there. The one I babysit is heavily firewalled, but I'd imagine not all of them are.
That is not the core, however; the core means the central pieces of a large telecom, the part that handles all the needed data to set up say, 10,000 or more calls per second.
For sure. The implicit trust that participants on the PSTN appear to give to each other, imparts a certain amount of undue influence to the constellation of dodgy systems interconnected to it.
No, but plenty of businesses that process your call data, whether it's for call recording, transcription, IVRs, speech analytics, CRM integration, call queuing, auto dialing, or SMS/chat features, are liable to be running stuff like FreeSWITCH, Asterisk, or similar somewhere in their stack.
Any business with a PBX that wants to do more than just basic call routing and PSTN connectivity is likely using third party tools. And a significant number of those tools are built on FreeSWITCH, Asterisk, or similar.
Depends on your definition of ‘major’
I am in the same boat as OP and the blog's example is a PBX software for business. I was also confused.
Major carriers are like Vodafone, T-Mobile, O2, Telia etc :)
This very much depends on your definition of major.
Is that a Foss or GPL compliant codebase/OS?
I've been beating the drum about this to everyone who will listen lately, but I'll beat it here too! Why don't we use seL4 for everything? People are talking about moving to a smart grid, having IoT devices everywhere, putting chips inside of peoples' brains (!!!), cars connect to the internet, etc.
Anyway, it's insane that we have a mathematically-proven secure kernel, we should use it! Surely there's a startup in this somewhere..
Rewriting all software would cost infinite money.
New smart grids with new software do not require a rewrite!
Almost all vulnerabilities are in apps and libraries which seL4 does little or nothing to solve. The only solution is secure coding across the entire stack which will reveal that much of the existing code is so low-quality that it just has to be thrown away and rewritten.
They will!
I imagine most of the people running Freeswitch have their own patches on top of the community releases anyway so we're compiling those security fixes in to our own builds. That's what we did anyway when I worked for a place using Asterisk, Freeswitch, and OpenSER/Kamailio whatever it is called this decade.
"potentially thousands of telecom stacks around the world that SignalWire has decided to keep vulnerable until the Summer, even after they published the patches on GitHub."
I would dare to whisper that the lack of security suits the NSA just fine. However, you can add just about every technically competent nation state, organised crime, major corporations, and a collection of non-state actors. About the only group besides us normies who I think might really care about this are the payment rails folks, as this insecurity facilitates more fraud.
I have gone on about this before but most carriers have a psychological aversion to security, and most of their vendors adopt the same.
They see themselves as the wire, and thus completely incapable of being targeted by hostile third parties.
Non exhaustive list of problems I have seen:
* Credit cards stored in plaintext on the carrier's WordPress website.
* ESXi and DRAC ports publicly available to the internet, not patched.
* Inbound authentication not dropped by core infrastructure, log files just filling up with brute force attempts (often successful).
* Software vendors not implementing carrier network standards and telling everyone they know better.
* Tech support opening SOCKS proxy ports for technical support reasons and then leaving them open, where they get abused for Netflix traffic.
* Field techs running around with core infrastructure passwords written on their paperwork.
* Vulnerable hardware remaining unpatched and available to the internet for years - particularly Fortigate stuff.
* Technicians building unencrypted PPTP VPNs on client infrastructure and leaving them open for years.
It doesn't surprise me that FreePBX/Asterisk etc. are full of issues. They only get yelled at when they push a change that knocks some eccentric SIP config offline; no one cares if they maintain vulnerable code as long as it works. Doubly so because there's a cottage industry in locating and using vulnerable SIP credentials for fraudulent phone calls.
Three thoughts.
1) To be slightly annoyingly contrarian, there is money to be made in secure telecom; Skype founders made a bundle, no?
2) This article conflates freeswitch with major telecom carrier infrastructure. My impression is that 30+% of the problem with security is not technical but economic. Carriers outsource a ton of their operations, effectively outsourcing most efforts to care about security... which never helps security posture unless the outsourcer considers security their core value proposition, which they generally don't, instead pushing themselves as a cost/capitalization play.
3) No discussion of Matrix here as where things are headed, security-wise? https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/
> Carriers outsource a ton of their operations
most of them - if not all of them. This article is from 2021:
https://berthub.eu/articles/posts/how-tech-loses-out/
Thanks for the pointer. What a /great/ article.
> 3) No discussion of Matrix here as where things are headed, security-wise?
From the same author (that is to say, from me): https://soatok.blog/2024/08/14/security-issues-in-matrixs-ol...
Why was FreeSWITCH written in C? Even in 2006 there were more secure alternatives.
We as an industry keep poking ourselves in our collective eye with a sharp stick, wondering why it hurts.
It's the language the devs were comfortable with, and some of them came from the Asterisk world.
In a past life I worked at AWS as a support engineer
I once got a ticket from T-Mobile (US) asking what "AWS's best practices were around security patching. How long should we wait?"
A week later they admitted to an enormous data breach
I'd say I switched phone carriers after that, but after working in the ISP market I already knew they were all absolute clown shows where all the money only went to C-levels and not infrastructure or security
the paid version is probably not much better
Time to rewrite telecom software in Rust?
I’m guessing this is a joke.
Rust only fixes the memory safety issues. It doesn’t fix bad software design, the problem where we have to trust other companies to keep their security issues under control (eg. Cisco), and it can’t undo bad decisions that have become industry standards (eg. SS7)
SS7, to summarize charitably, was built assuming trust exists. Some of the vulnerabilities aren't vulnerabilities, they're features!
Do you believe this statement refutes my claim that SS7 was badly designed?
Yes, given that the SS7 design started in the 1970s, when telecommunications was either the purview of a government agency or a state-granted monopoly (depending on where you were in the world), in which case it is perfectly rational to assume that your counterparties are trusted.
SS7 wasn't a bad decision.
Allowing any random bozo to connect to the network's trusted center was a bad decision.
If the regulatory mandate to allow interconnection had also mandated the development and usage of a secure protocol for that interconnection, we'd be fine. But it mandated the opposite. Politicians got us into this mess, not programmers.
I would argue it’s the managers of the programmers who failed to foresee this as a future requirement, hence they didn’t tell the programmers to make it resilient to reasonably foreseeable changes to the operating environment.
It was not reasonably foreseeable. The Bell system had been a government-blessed monopoly since its inception. Pigs would fly before scammers were allowed to connect to raw SS7.
> (eg. SS7)
I don't have a lock on my mailbox. It is bad that the "low trust" internet overflows into my everyday life. I would rather that there was some separation of telephone calls, local community and banking etc from the lawless voids, than normalizing all these scams.
Telephone scam calls are mostly an internet problem.
I don’t get how your anecdote relates to SS7. SS7 is available country-wide (I’m assuming it doesn’t directly cross national borders) and the surface area of all of the cell towers and data centers they connect to is very large. Even larger if you consider all of the software that runs on the devices that are legitimately connected to that network. This isn’t even remotely comparable to some fictional high trust small rural town where everyone knows everyone.
I do have a lock on my mailbox, but it has to adhere to the USPS skeleton keys (which have been leaked and are exploited by thieves). Another example of bad design, or at least design that wasn’t able to withstand reasonably foreseeable changes to the operating environment.
I wonder who in practice runs XMLRPC today. My feeling is that nobody has looked at that code in decades, because nobody cares.
SS7 again... ofc. Kinda tiresome now.
Yes, insecure, but needed. Unless you want to shut down 2G and 3G worldwide. It is happening, slowly.
The FreeSwitch stuff? Telcos buy from vendors like Nokia, Cisco, Ericsson, Huawei, where they can't see the source anyway.
[dead]
This made me think, how many people have tried feeding some of this critical code to the best LLM models and asking it to point out any bugs?
First, can you point to examples where using LLMs to find vulnerabilities works?
You probably don't need an LLM to find vulnerabilities in software written like this. It took me a few minutes with GitHub in a web browser, but I'm sure you could make some headway with semgrep if you were bold enough.