Tor 0day: Stopping Tor Connections
Thursday, 23 July 2020
When coming across a security vulnerability, I have a basic philosophy: Try your best to report it to the right people. Sometimes the reporting is painless. Usually it's a little challenging. Over a decade ago, I tried to report an issue to Verisign. It took weeks of constantly pestering a security staff member before he passed the vulnerability up the chain. His manager saw the issue, thanked me on the phone, and shipped me a box of swag to show that my effort was appreciated. (I was happy with them fixing the issue. To me, the phone call and swag was above and beyond.)
Unfortunately, sometimes companies are non-responsive. At that point, I have a few options. I can sell the vulnerability to someone else who will certainly exploit it. I can just let it sit -- maybe the bug will be fixed by coincidence or become obsolete, or maybe I'll find another use for it later. (I have a large collection of sitting vulnerabilities, some dating back decades.) However, sometimes I have reasons for needing a specific issue fixed soon. If the company doesn't respond to security reports, then maybe they will react to public shaming.
For people who follow my blog, you know that I've literally spent years trying to report security vulnerabilities to the Tor Project. Just finding who to report bugs to was like a masochistic scavenger hunt. After my public shaming of the Tor Project (in 2017), they changed their web site design to make it easier to report vulnerabilities. They also opened up their bug bounty program at HackerOne.
Unfortunately, while it is easier now to report vulnerabilities to the Tor Project, they are still unlikely to fix anything. I've had some reports closed out by the Tor Project as 'known issue' and 'won't fix'. For an organization that prides itself on their secure solution, it is unclear why they won't fix known serious issues.
The Penultimate Straw
Two events really set me off this year. The first issue was related to a massive DDoS attack over the Tor network last February. Lots of onion services went offline, and many relays crashed. My own onion service was hard hit but managed to stay up after I identified the root cause and patched my Tor daemon. I reported this vulnerability to the Tor Project (HackerOne bug #789065). The outcome was less than stellar:
- First, the Tor Project asked for a proof of concept. I responded with source code and log files.
- Then they asked for more details about how it worked. I provided an extremely detailed description. This resulted in a lot of bidirectional communication with descriptions, explanations, and examples. (At this point, I thought things were going well.)
- After a lot of back-and-forth technical discussions, the Tor Project's representative wrote, "I'm a bit lost with all this info in this ticket. I feel like lots of the discussion here is fruitful but they are more brainstormy and researchy and less fitting to a bug bounty ticket." They concluded with: "Is there a particular bug you want to submit for bug bounty?" In my opinion, describing a vulnerability and mitigation options is not "brainstormy and researchy". To me, it sounds like they were either not competent enough to fix the bug, or they were not interested. In any case, they were just wasting time.
The Final Straw
The second issue, when I decided to go public with Tor 0days, happened last month. That's when, after three years of waiting, I gave up on the Tor Project.

Over three years ago, I tried to report a vulnerability in the Tor Browser to the Tor Project. The bug is simple enough: using JavaScript, you can identify the scrollbar width. Each operating system has a different default scrollbar size, so an attacker can identify the underlying operating system. This is a distinct attribute that can be used to help uniquely track Tor users. (Many users think that Tor makes them anonymous. But Tor users can be tracked online; they are not anonymous.)
I couldn't find a direct way to report the bug to the Tor Project. Eventually, I gave up on their reporting scavenger hunt and blogged about the vulnerability. I included details and a working example.
A lot of people in the Tor community wrote to me, effectively saying "so what?" However, the initial response from the Tor Project confirmed the significance. They entered the vulnerability into their system (defect #22137) and gave it a "high" priority. When they opened up their bug bounty program on HackerOne, they even paid me a bounty for this issue. This issue was reported 3 years ago, on 22-July-2017 via HackerOne. It was assigned bug #252580, a bounty was paid and the issue was closed as 'Resolved'. The HackerOne bug was publicly disclosed three months later (20-Oct-2017).
But that's where the positive progress stopped. Although it was marked as 'resolved', the issue was never fixed. Rather, the Tor Project pushed it upstream, to Mozilla. (The Tor Browser is based on Mozilla's Firefox web browser.) Firefox Bug 1397996 sat unassigned for two years. A year after that, the person assigned to the bug removed himself and wrote, "Not actively working on this, unassign myself." So that's three years that a high priority bug at the Tor Project has sat unaddressed, even though they claim to have resolved the issue.
It isn't like the Tor Project doesn't have options for fixing this issue without Mozilla's help. They just need to define a default scrollbar width rather than inherit the one from the operating system. With all of the other customizations that they add to make the Tor Browser, this is an easy one to fix -- but they have decided to not fix it.
Dropping 0Days
A "0day" (pronounced 'zero-day' or 'oh-day') is any exploit that has no known patch or wide-spread solution. A 0day doesn't need to be unique or novel; it just needs to have no solution. I'm currently sitting on dozens of 0days for the Tor Browser and Tor network. Since the Tor Project does not respond to security vulnerabilities, I'm just going to start making them public. While I found each of these on my own, I know that I'm not the first person to find many of them.

The scrollbar profiling vulnerability is an example of a 0day in the Tor Browser. But there are also 0days for the Tor network. One 0day for the Tor network was reported by me to the Tor Project on 27-Dec-2017 (about 2.5 years ago). The Tor Project closed it out as a known issue, won't fix, and "informative".
Let's start with a basic premise: let's say you're like some of my clients -- you're a big corporation with an explicit "no Tor on the corporate network" rule. This is usually done to mitigate the risks from malware. For example, most corporations have a scanning proxy for internet traffic that tries to flag and stop malware before it gets downloaded to a computer in the company. Since Tor prevents the proxy from decoding network traffic and detecting malware, Tor isn't permitted. Similarly, Tor is often used for illegal activities (child porn, drugs, etc.); blocking Tor reduces the risk from employees using Tor for illegal purposes. Although denying Tor can also mitigate the risk from corporate espionage, that's usually a lesser risk than malware infections and legal concerns. (Keep in mind, these same block and filtering requirements apply to nation-states, like China and Syria, that want to control and censor all network traffic. But I'm going to focus on the corporate environment.)
It's one thing to have a written policy that says "Don't use Tor." However, it's much better to have a technical solution that enforces the policy. So how do you stop users from connecting to the Tor network? The easy way is to download the list of Tor relays. A network administrator can add in a firewall rule blocking access to each Tor node.
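As a sketch of that naive approach, a downloaded relay list can be turned into firewall rules mechanically. This is a hedged illustration: the relay entries below are fake, and a real deployment would pull the current list from the Tor consensus (or a service like Onionoo) and refresh it constantly.

```python
# Sketch: turn a list of known Tor relay addresses into firewall rules.
# The relay list is hard-coded here for illustration; in practice it
# must be fetched and refreshed often, since the Tor relay list changes.

def make_block_rules(relays):
    """Generate one iptables-style rule string per relay (IP, ORPort)."""
    rules = []
    for ip, orport in relays:
        rules.append(
            f"iptables -A OUTPUT -p tcp -d {ip} --dport {orport} -j DROP"
        )
    return rules

# Hypothetical relay entries (documentation addresses, not real Tor nodes):
relays = [("192.0.2.10", 9001), ("198.51.100.7", 443)]
for rule in make_block_rules(relays):
    print(rule)
```

Note that the rule list grows with the relay list, which is exactly the scaling problem described next.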
0Day #1: Blocking Tor Connections the Smart Way
There are two problems with the "block them all" approach. First, there are thousands of Tor nodes. Checking every network connection against every possible Tor node takes time. This is fine if you have a slow network or low traffic volume, but it doesn't scale well for high-volume networks. Second, the list of nodes changes often. This creates a race condition, where there may be a new Tor node that is seen by Tor users but isn't in your block list yet.

However, what if there were a distinct packet signature, provided by every Tor node, that could be used to detect a Tor network connection? Then you could set the filter to look for the signature and stop all Tor connections. As it turns out, this packet signature is not theoretical.
Tor uses TLS for negotiating network security. However, Tor is built on zero-trust; each TLS certificate is randomly generated when the daemon starts, since clients never validate it. Each connection from a Tor client to a Tor server looks like:
- Client begins the TCP three-way handshake by sending a TCP SYN packet to the Server.
- Server responds with a SYN-ACK.
- Client sends an ACK to complete the three-way TCP handshake.
- Client sends a TLS Client-Hello request. This is the first data packet from the client.
- Server responds with a TLS Server-Hello and includes the certificate that was randomly generated when the server first started.
The randomly generated certificate has a distinctive profile:
- Self-signed. Typically, TLS includes a chain of x509 certificates for authentication. With the Tor daemon, the chain only contains one certificate, meaning it is self-signed.
- Specific ordering. For the certificate, there are a variety of fields that can be in any order, but the Tor daemon always uses the same fields in the same order: signature, issuer, validity, subject, and then public key info. There is no other information in the server's certificate. In contrast, a typical certificate usually has multiple extensions and additional data fields.
- One issuer. This record only contains a common name (CN) that starts with "www." and ends with ".com". In between are 8-20 random letters and numbers. This is unusual since the issuer CN is usually the proper name of the issuing authority. With Tor, there are no country (C), state (ST), organization (O) or other issuer fields that are typically seen with both authenticated and self-signed certificates.
- One subject. The subject common name (CN) starts with "www." and ends with ".net". In between are 8-20 random letters and numbers that are not the same as the issuer. Like the issuer record, there are no other fields (S, ST, O, etc.) that are commonly found with real certificates.
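These attributes can be checked mechanically. Here is a minimal sketch of the profile check (the function and input format are my own illustration; real code would extract the chain length, issuer, and subject from the parsed x509 certificate):

```python
import re

# Sketch of the certificate profile check described above.  Inputs: the
# certificate chain length, plus the issuer and subject fields as lists
# of (attribute, value) pairs.

# 8-20 characters from Tor's Base32 alphabet, [a-z2-7]:
_ISSUER_CN = re.compile(r"^www\.[a-z2-7]{8,20}\.com$")
_SUBJECT_CN = re.compile(r"^www\.[a-z2-7]{8,20}\.net$")

def matches_tor_profile(chain_len, issuer, subject):
    if chain_len != 1:          # self-signed: only one cert in the chain
        return False
    if len(issuer) != 1 or len(subject) != 1:
        return False            # only a CN; no C, ST, O, etc.
    i_attr, i_val = issuer[0]
    s_attr, s_val = subject[0]
    if i_attr != "CN" or s_attr != "CN":
        return False
    if not _ISSUER_CN.match(i_val) or not _SUBJECT_CN.match(s_val):
        return False
    return i_val[4:-4] != s_val[4:-4]   # the random parts must differ

# Example using the sample names from this article:
print(matches_tor_profile(1,
                          [("CN", "www.xds4wpy6r7uq.com")],
                          [("CN", "www.ph4l62eo3zyqq.net")]))  # -> True
```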
On a very technical note:
For my own packet scanner, I wrote a function that walks the x509 certificate's ASN.1 structure and generates a packed signature that shows data and scope. The Tor server's signature looks like:
{{[2],#,{1.2.840.113549.1.1.#,NULL},{{{2.5.4.3,"www.X.com"}}},{"#Z","#Z"},{{{2.5.4.3,"www.X.net"}}},{{1.2.840.113549.1.1.1,NULL},D}},{1.2.840.113549.1.1.#,NULL},D}
where:
- "X" is 8-20 characters in the range [a-z2-7]. This character range is because Tor uses Base32 encoding.
- "D" is variable data.
- "#" is a number (can be multiple digits).
- All other characters are literals that must match in the same order.

This example is from a Tor server:

{{[2],10893829876978619801,{1.2.840.113549.1.1.11,NULL},{{{2.5.4.3,"www.xds4wpy6r7uq.com"}}},{"171228000000Z","180517000000Z"},{{{2.5.4.3,"www.ph4l62eo3zyqq.net"}}},{{1.2.840.113549.1.1.1,NULL},Data[271]}},{1.2.840.113549.1.1.11,NULL},Data[129]}
ASN.1 uses dotted number sequences to define specific codes. For example, 2.5.4.3 identifies the common name (CN). It appears twice in the signature: once for the issuer and once for the subject. The ASN.1 code 1.2.840.113549.1.1.11 identifies sha256 with RSA encryption. My signature uses "1.2.840.113549.1.1.#" since the specific encryption can vary based on the version of the server's SSL library. (Oh yeah! Profile the server's library! Another 0day!)
When the packet sniffer sees a TLS server-side certificate, it generates a signature. If the signature matches the pattern for a Tor server, the scanner flags the connection as a Tor connection. (This is really fast.)
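To make the matching concrete, here is a sketch that turns the packed-signature template into a regular expression and tests it against the example above. The placeholder handling (NUM/RND/DAT standing in for "#", "X", and "D") is my own illustration, not the scanner's actual code:

```python
import re

# Build a regex from the packed-signature template.  NUM, RND, and DAT
# are stand-ins for the '#', 'X', and 'D' placeholders described above.
TEMPLATE = (
    '{{[2],NUM,{1.2.840.113549.1.1.NUM,NULL},'
    '{{{2.5.4.3,"www.RND.com"}}},{"NUMZ","NUMZ"},'
    '{{{2.5.4.3,"www.RND.net"}}},'
    '{{1.2.840.113549.1.1.1,NULL},DAT}},'
    '{1.2.840.113549.1.1.NUM,NULL},DAT}'
)
TOR_SIG = re.compile(
    re.escape(TEMPLATE)
    .replace('NUM', r'\d+')             # '#': one or more digits
    .replace('RND', '[a-z2-7]{8,20}')   # 'X': Base32 characters
    .replace('DAT', '[^{}]*')           # 'D': variable data
)

# The example signature from the Tor server shown above:
example = (
    '{{[2],10893829876978619801,{1.2.840.113549.1.1.11,NULL},'
    '{{{2.5.4.3,"www.xds4wpy6r7uq.com"}}},'
    '{"171228000000Z","180517000000Z"},'
    '{{{2.5.4.3,"www.ph4l62eo3zyqq.net"}}},'
    '{{1.2.840.113549.1.1.1,NULL},Data[271]}},'
    '{1.2.840.113549.1.1.11,NULL},Data[129]}'
)
print(bool(TOR_SIG.fullmatch(example)))  # -> True
```

A full pattern match like this is a single regular-expression test per certificate, which is why the check is fast enough to run inline on live traffic.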
Validating the Vulnerability
Back in 2017, I used a scanner and Shodan to search for TLS certificates. In theory, it is possible for there to be some server with a server-side TLS certificate that matches this signature but that isn't a Tor node. In practice, every match was a Tor node. I even found servers running the Tor daemon and with open onion routing (OR) ports that were not in the list of known Tor nodes. (Some were non-public bridges. Others were private Tor nodes.)

Similarly, I scanned every known Tor node. Each matched this Tor-specific certificate profile. That makes the detection 100% accurate; no false positives and no false negatives. (Although now that I've made this public, someone could intentionally generate false-positive or false-negative certificates. The false-positives are relatively easy to construct. The false-negatives will require editing the Tor daemon's source code.)
While a scanner could be used to identify and document every Tor server, corporations don't need to do that. Corporations already use stateful packet inspection on their network perimeters to scan for potential malware. With a single rule, they can also check every new connection for this Tor signature. Without using large lists of network addresses, you can spot every connection to a Tor node and shut it down before the session layer (TLS) finishes initializing, and before any data is transferred out of the network.
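For perimeter enforcement, an IDS can encode the same check in one rule. This is a hedged sketch for Suricata: the tls.cert_issuer and tls.cert_subject keywords exist in recent Suricata releases, but treat the exact buffer formatting and anchoring here as assumptions to verify against your version's documentation.

```
alert tls $HOME_NET any -> $EXTERNAL_NET any ( \
  msg:"Possible Tor-style self-signed certificate"; \
  tls.cert_issuer;  pcre:"/^CN=www\.[a-z2-7]{8,20}\.com$/"; \
  tls.cert_subject; pcre:"/^CN=www\.[a-z2-7]{8,20}\.net$/"; \
  sid:1000001; rev:1;)
```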
Tor Project's Reply
I reported this simple way to detect Tor traffic to the Tor Project on 27-Dec-2017 (HackerOne bug #300826). The response that I got back was disappointing:

Hello and thanks for reporting this issue!
This is a known issue affecting public bridges (the ones distributed via bridgedb); see ticket #7349 for more details. This issue does not affect private bridges (the ones that are distributed in a P2P adhoc way). As indicated in the ticket, to fix this problem, we are aiming to make it possible to shutdown the ORPort of Tor relays. In our opinion, we should not to try to imitate normal SSL certs because that's a fight we can't win (they will always look differently or have distinguishers, as has been the case in the pluggable transports arms race).
Unfortunately, ticket #7349 is not straightforward to implement and has various engineering complexities; please see the ticket for more information
Due to the issue being known and planned to be fixed, I'm marking this issue as Informative.
Let's see:
- They say it is a known bug and not fixed.
- They referred me to another bug (#7349, "Very High" priority) that had already been opened for five years. (It has now been open for eight years.)
- They only viewed it as a risk to bridges, not as a risk to all Tor traffic, even though it impacts all Tor users, including users who do not use bridges.
- They gave a vague opinion with an unjustifiable explanation. ("In our opinion, we should not to try to imitate normal SSL certs because that's a fight we can't win", "not straightforward to implement", and "has various engineering complexities.")
- They referred me to the technical discussion in the related (unfixed) bug, but I didn't see any reason that they couldn't add more variety in order to prevent packet profiling and filtering. As a test, I changed the random certificate's profile on one of my Tor daemons and it continued to work without a problem.
The certificates are generated in the Tor daemon's function tor_tls_context_init_certificates. The first few lines generate the 8-20 random character domain names, and the rest generates the certificates without any other settings:

nickname = crypto_random_hostname(8, 20, "www.", ".net");
#ifdef DISABLE_V3_LINKPROTO_SERVERSIDE
nn2 = crypto_random_hostname(8, 20, "www.", ".net");
#else
nn2 = crypto_random_hostname(8, 20, "www.", ".com");
#endif

/* Generate short-term RSA key for use with TLS. */
if (!(rsa = crypto_pk_new()))
  goto error;
if (crypto_pk_generate_key_with_bits(rsa, RSA_LINK_KEY_BITS)<0)
  goto error;

/* Generate short-term RSA key for use in the in-protocol ("v3")
 * authentication handshake. */
if (!(rsa_auth = crypto_pk_new()))
  goto error;
if (crypto_pk_generate_key(rsa_auth)<0)
  goto error;

/* Create a link certificate signed by identity key. */
cert = tor_tls_create_certificate(rsa, identity, nickname, nn2,
                                  key_lifetime);
/* Create self-signed certificate for identity key. */
idcert = tor_tls_create_certificate(identity, identity, nn2, nn2,
                                    IDENTITY_CERT_LIFETIME);
/* Create an authentication certificate signed by identity key. */
authcert = tor_tls_create_certificate(rsa_auth, identity, nickname, nn2,
                                      key_lifetime);

There are lots of options for fixing this problem. Here are just a few:
- Use the same random characters for the .com and .net names. Very few domains use completely different names in the TLS certificates. (The ones that do use alternate names usually have a long list of names and not just two names.) Also, include "S", "O", and other x509 attributes in the issuer and subject records.
- Allow the torrc file to specify the common names and TLS attributes. E.g., If my Tor node resides at Digital Ocean, then I'd select information that looks like some other Digital Ocean customer. Better yet: let me supply the TLS certificate. I can supply a real one using Let's Encrypt and nobody will know that it's Tor.
- Since the certificate isn't verified anyway, include 1-2 additional certificates in the chain so it does not look like it is self-signed.
- Randomize the parameter ordering and add in some TLS extensions. Make them look less normalized.
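For context on what these fixes change, here is a rough approximation (my own sketch in Python, not Tor's actual C implementation) of what crypto_random_hostname produces. The output always fits the narrow www.[a-z2-7]{8,20}.com/.net pattern that a scanner keys on:

```python
import secrets

# Approximation of Tor's crypto_random_hostname(): a random Base32
# string of 8-20 characters, wrapped in a fixed prefix and suffix.
BASE32 = "abcdefghijklmnopqrstuvwxyz234567"

def random_hostname(min_len=8, max_len=20, prefix="www.", suffix=".com"):
    n = min_len + secrets.randbelow(max_len - min_len + 1)
    core = "".join(secrets.choice(BASE32) for _ in range(n))
    return prefix + core + suffix

print(random_hostname())  # e.g. www.<8-20 Base32 chars>.com
```

Because every output matches the same tight pattern, any of the options above (reusing one name, operator-supplied certificates, added attributes, or randomized ordering) would widen the pattern and break the fingerprint.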
More Soon
If you have ever worked with bug bounties, then you are certain to recognize the name Katie Moussouris. She created the first bug bounty programs at Microsoft and the Department of Defense. She was the Chief Policy Officer at HackerOne (the bug bounty service), and she spear-headed NTIA's Awareness and Adoption Group's effort to standardize vulnerability disclosure and reporting. (Full disclosure: I was part of the same NTIA working group for a year. I found Katie to be a positive and upbeat person. She is very sharp, fair-minded, and realistic.)

Earlier this month, Katie was interviewed by the Vergecast podcast. I had expected her to praise the benefits of vulnerability disclosure and bug bounty programs. However, she surprised me. She has become disenchanted by how corporations are using bug bounties. She noted that corporate bug bounties have mostly been failures. Companies often prefer to outsource liability rather than solve problems. And they view the bug bounties as a way to pay for the bug and keep it quiet rather than fix the issue.
Every problem that Katie brought up about the vulnerability disclosure process echoed my experience with the Tor Project. The Tor Project made it hard to report vulnerabilities. They fail to fix vulnerabilities. They marked issues as 'resolved' when they were never fixed. They outsource simple issues, like passing a simple scrollbar issue upstream to Firefox where it is never fixed. And they make excuses for not addressing serious security issues.
During the interview, she mentioned that researchers and people reporting vulnerabilities only have a few options: try to report it, sell it, or go public. I've tried reporting and repeatedly failed. I've sold working exploits, but I also know that they can be used against me and my systems if the core issues are not fixed. (And even the people who buy exploits from me would rather have these vulnerabilities fixed.) That leaves public disclosure.
In future blog posts, I will be disclosing more Tor 0day vulnerabilities. Most (but probably not all) are already known to the Tor Project. It won't be every blog entry (I also have non-Tor topics that I want to write about), but I've got a list of vulnerabilities that are ready to drop. (And for the Tor fanboys who think "use bridges" will get around this certificate profiling exploit: don't worry, I'll burn bridges next.)

Submitting a patch is much more difficult than it sounds.
1. Create the patch.
2. Join their online community so you can propose the patch.
3. Argue the need for the patch and try to convince them to include it.
4. Wait for the patch to be vetted and incorporated.
The Tor Project has actual employees who develop code. In theory, it is easier to tell them the problem and let them develop the patch.
In practice: If I can't convince them of the security concern, then how could I ever convince them to incorporate the patch?
Also: (Katie brought this up during her interview.) They have employees who are developing code for them. If I'm developing code for them, then why am I not compensated (paid, medical coverage, etc.) like an employee?
I am certain that some companies are profiling packets and blocking Tor users. (I'm certain, because I showed them how to do it.)
A few years ago, I heard about one person who was frog-marched out of the building by security for repeatedly violating the "do not use Tor" policy. (Seriously: if security comes by to lecture you about not using Tor at work, then don't continue to use Tor at work. You are not anonymous.)
I defined it above: "any exploit that has no known patch or wide-spread solution." I also linked to Wikipedia as a reference -- see their first paragraph:
https://en.wikipedia.org/wiki/Zero-day_(computing)
"A zero-day vulnerability is a computer-software vulnerability that is unknown to, or unaddressed by, those who should be interested in mitigating the vulnerability."
Norton: https://us.norton.com/internetsecurity-emerging-threats-how-do-zero-day-vulnerabilities-work-30sectech.html
"it also means an official patch or update to fix the issue hasn't been released."
Fireeye: https://www.fireeye.com/current-threats/what-is-a-zero-day-exploit.html
"A zero-day attack happens once that flaw, or software/hardware vulnerability, is exploited and attackers release malware before a developer has an opportunity to create a patch to fix the vulnerability"
The folks at CMU point out that there are many definitions for 0day. https://insights.sei.cmu.edu/cert/2015/07/like-nailing-jelly-to-the-wall-difficulties-in-defining-zero-day-exploit.html
My definition is consistent with definitions 3, 4, 6, and 10.
The term itself comes from the world of virology. It was later adopted by anti-virus vendors. You start counting forward when there is a solution.
Day 1: A solution is available. But lots of people may not know about it yet.
Day 2: More people learn about the solution and apply it.
Day n: The point when nearly everyone has the solution.
Because you don't count forward until there is a solution, an active vulnerability -- even one that is years old -- is still at day-zero (0day).
The same goes for the coronavirus. "Patient 0" (the first person infected) was identified months ago. But a cure or inoculation is still at day-zero. However, because there is a known workaround (shelter in place, wear a mask, social distancing), we (in the United States) are past day-1, but there are not enough people practicing it to be widely effective yet -- we're far from day-n.
Most people think of 0day vulnerabilities as things with payloads or direct attacks. However, preventing people from connecting to the Tor network is a denial-of-service exploit. A DoS without a solution is an 0day.
Similarly, if the purpose of the tool is to deter tracking and profiling, and a vulnerability exists that permits tracking and profiling, then it is an attack on the tool's functionality. The scrollbar profiling issue is an 0day that performs information leakage. This is the same class of attack as hacking into a corporation and stealing a copy of their database.
ESNI (Encrypted Server Name Indication; encrypted client hello) should address the name profiling in the client-hello. I didn't mention this in this blog entry, but it's also very useful for detecting direct Tor connections. I covered this vulnerability a few years ago: https://hackerfactor.com/blog/index.php?/archives/790-Security-Its-in-the-Name.html
However, I don't think ESNI encrypts the server's certificate -- which is what I'm using to detect the connection.