TLS Fingerprint: the main BAS problem

Support
  • @clarabellerising said in TLS Fingerprint: the main BAS problem:

    @rsgmsk it makes one want to twirl a finger at the temple :) keep your paranoia to yourself; everyone who replied to you turned out, in your eyes, to be a moron, but if everything around you stinks... If you are so sure that the TLS of BAS's Chromium GETS FLAGGED, set up a MITM wrapped around curl-impersonate, close the topic, and stop embarrassing yourself.

    I am not embarrassing myself! I laid out the core problem. It does not take much brains to understand that an identical TLS fingerprint is a giveaway. The TLS fooler is extra load and extra resources, not to mention the additional latency it adds to traffic.

  • @rsgmsk hahah, "an identical TLS fingerprint is a giveaway", that's funny. What matters is the authenticity of the browser's TLS, not its uniqueness: if the TLS JA3 (which can vary in new versions because of random TLS extension ordering) or JA4 is authentic for a browser, then it is a browser; if its hash is authentic for curl/axios/whatever, then it is a bot. Uniqueness has nothing to do with it.
    You have already been given the solution: do not use Firefox; current browser version fingerprint = BAS version => profit. There is no need to reinvent the wheel. If you want Firefox, a unique TLS, or whatever else, build your own solution.

  • Otherwise one gets the impression that you do not understand at all what a TLS fingerprint is. You just want to randomize it for the sake of uniqueness, because supposedly every user must be unique. And by the way... what about the passive OS fingerprint? Does that question not concern you?

  • I worked hard for about 3 months and built my own system, from the proxy up to everything else, but I could not make it work.

    Because

    ⭐ WHY "NO PROXY IN THE WORLD" CAN MATCH CHROME’S EXACT JA3 + TCP/IP + QUIC SIGNATURE

    ⚠️ Short answer:
    Because Chrome’s JA3, TCP stack, QUIC engine, TLS extensions, packet timings, congestion control, and OS-level fingerprints all depend on deep system-level behavior.
    These things cannot be fully mimicked by a proxy because:

    👉 A proxy does not replace the browser — it only forwards requests.

    👉 The browser’s TCP/TLS/QUIC handshake signals are generated by the client machine.

    👉 A proxy can only modify the server-side handshake (the connection between proxy and server), not the client-side handshake (the connection between browser and proxy).

    Below is a deeper breakdown 👇

    🔥 1. JA3 fingerprint = the browser’s TLS ClientHello

    A JA3 fingerprint is built from the TLS ClientHello and includes:

    Chrome’s internal TLS library

    Chrome’s version-specific cipher ordering

    Chrome’s supported groups (elliptic curves)

    Chrome’s GREASE values/extensions

    Chrome’s signature algorithms

    Chrome’s ALPN list (http/1.1, h2, h3)

    The extension ordering and randomness that Chrome uses

    All of these are hard-coded and/or dynamically set inside Chrome.

    Why a proxy fails to mimic the client JA3:
    A proxy cannot replace the browser’s ClientHello. Instead, the proxy will create a new TLS handshake to the target server. That handshake is the proxy’s JA3, not the browser’s. The server can observe:

    ClientHello #1: Client → Proxy (original)

    ClientHello #2: Proxy → Server (spoofed)

    This mismatch is detectable by server-side ML/heuristics and is a strong bot signal.
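    The JA3 construction described above can be sketched concretely. Per the original Salesforce JA3 specification, the fingerprint string is `TLSVersion,Ciphers,Extensions,SupportedGroups,PointFormats` (fields comma-separated, values dash-separated, GREASE values excluded), and the hash is its MD5. The field values below are illustrative placeholders, not a real Chrome ClientHello:

```python
import hashlib

def ja3_hash(tls_version, ciphers, extensions, curves, point_formats):
    """Build the JA3 string and hash it: fields comma-separated,
    values inside a field dash-separated, result MD5-hashed."""
    # RFC 8701 GREASE code points (0x0A0A, 0x1A1A, ..., 0xFAFA) are skipped.
    grease = {0x0A0A + 0x1010 * i for i in range(16)}

    def clean(values):
        return "-".join(str(v) for v in values if v not in grease)

    ja3_str = ",".join([
        str(tls_version),
        clean(ciphers),
        clean(extensions),
        clean(curves),
        clean(point_formats),
    ])
    return ja3_str, hashlib.md5(ja3_str.encode()).hexdigest()

# Hypothetical ClientHello field values (illustrative only):
s, h = ja3_hash(
    771,                       # TLS 1.2 in the record layer (0x0303)
    [0x1301, 0x1302, 0xC02B],  # cipher suites
    [0x2A2A, 0, 23, 65281],    # extensions; 0x2A2A is GREASE and is dropped
    [29, 23, 24],              # supported groups (x25519, secp256r1, secp384r1)
    [0],                       # EC point formats
)
print(s)  # 771,4865-4866-49195,0-23-65281,29-23-24,0
print(h)  # 32-hex-character MD5 digest
```

Because every field of the string comes straight from the ClientHello, a proxy that opens its own upstream TLS connection necessarily produces its own JA3, exactly as described above.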

    🔥 2. Chrome’s TCP/IP stack fingerprint

    TCP/IP fingerprints arise from OS/network stack behavior, including:

    Initial window size

    TCP timestamps

    TTL (Time To Live)

    Retransmission behavior and intervals

    Congestion control algorithm (e.g., BBR, CUBIC)

    Packet pacing and ACK timing

    MTU discovery behavior

    Zero-window probes

    OS kernel networking quirks (Linux vs Windows vs macOS)

    These characteristics originate on the client machine — NOT on the proxy. A proxy cannot rewrite your OS-level TCP behavior (except by extreme kernel-level MITM modifications). Servers can use those signals to detect that a request didn’t come from a normal Chrome/TCP stack.
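    The passive OS fingerprinting described above (p0f-style) can be sketched as matching a SYN packet's observable fields against per-OS signatures. The signature values below are rough illustrative assumptions, not exact values for any real kernel:

```python
# Illustrative per-OS SYN signatures: initial TTL, window size, and
# TCP option ordering. Real tools like p0f carry much richer tables.
SIGNATURES = {
    "linux":   {"initial_ttl": 64,  "window": 64240,
                "opts": ("mss", "sackOK", "ts", "nop", "wscale")},
    "windows": {"initial_ttl": 128, "window": 64240,
                "opts": ("mss", "nop", "wscale", "nop", "nop", "sackOK")},
}

def normalize_ttl(observed_ttl):
    """Round the hop-decremented TTL up to the nearest common initial value."""
    for initial in (64, 128, 255):
        if observed_ttl <= initial:
            return initial
    return observed_ttl

def guess_os(observed_ttl, window, opts):
    matches = [name for name, sig in SIGNATURES.items()
               if normalize_ttl(observed_ttl) == sig["initial_ttl"]
               and window == sig["window"]
               and tuple(opts) == sig["opts"]]
    return matches or ["unknown"]

# A SYN that crossed ~13 hops from a Linux-like stack:
print(guess_os(51, 64240, ("mss", "sackOK", "ts", "nop", "wscale")))  # ['linux']
```

The bot signal is the mismatch: a User-Agent claiming Windows while the SYN matches a Linux signature (the datacenter box the proxy or automation runs on) is exactly the inconsistency the section above describes.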

    🔥 3. QUIC fingerprinting (HTTP/3) = IMPOSSIBLE to fully spoof

    Chrome uses its own QUIC stack (cronet) with:

    Version-specific QUIC crypto parameters

    qlog-like packet sequencing and timing

    QUIC packet pacing and loss-recovery behavior

    ACK delay strategies

    QPACK encoder/decoder behavior

    Each Chrome version has a characteristic QUIC signature. A proxy cannot fake the browser’s end-to-end QUIC behavior because QUIC is encrypted end-to-end and tightly integrated with the client implementation. A proxy would either need to terminate and re-establish QUIC (producing a different signature) or force the connection to downgrade to HTTP/1.1 — both of which are detectable.

    Common detection behavior:

    Real Chrome → speaks QUIC (HTTP/3)

    Bot/proxy → falls back to HTTP/1.1 or shows inconsistent QUIC behavior
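    That detection behavior reduces to a consistency check: a client whose User-Agent claims a modern Chrome but never upgrades to HTTP/3, even though the server advertised it, is suspect. A minimal heuristic sketch (the rule and names are illustrative, not a real product's logic):

```python
def protocol_consistency(claims_chrome, h3_advertised, protocols_seen):
    """Flag a client that claims to be a modern Chrome but never speaks
    HTTP/3 despite the server advertising it (e.g. via Alt-Svc).
    Purely an illustrative heuristic."""
    if claims_chrome and h3_advertised and "h3" not in protocols_seen:
        return "suspicious: Chrome UA but no QUIC upgrade"
    return "consistent"

print(protocol_consistency(True, True, {"http/1.1"}))   # suspicious
print(protocol_consistency(True, True, {"h2", "h3"}))   # consistent
```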

    🔥 4. Chrome’s TLS GREASE

    Chromium implements GREASE (as in RFC 8701) — intentionally randomizing certain reserved values (extension ids, signature algorithms, etc.) to future-proof and prevent ossification. GREASE values are:

    Random per connection and build

    Version-dependent and platform-dependent

    Proxies can’t reliably predict or reproduce Chrome’s GREASE behavior. Missing or incorrect GREASE patterns are a signal that the client is not a genuine Chrome build.
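    The GREASE code points themselves are fully specified: RFC 8701 reserves sixteen values of the form 0xVAVA (0x0A0A, 0x1A1A, ..., 0xFAFA), and a client picks among them at random per connection. A short sketch of both sides, generation and a crude detector check (the "at least one GREASE extension" heuristic is an illustrative assumption, not a complete detection rule):

```python
import secrets

# The 16 GREASE code points reserved by RFC 8701.
GREASE_VALUES = [0x0A0A + 0x1010 * i for i in range(16)]

def random_grease():
    """Pick one GREASE value at random, as a client does per connection."""
    return secrets.choice(GREASE_VALUES)

def is_grease(code_point):
    return code_point in GREASE_VALUES

def looks_grease_aware(extension_ids):
    """Crude detector-side check: a genuine modern Chrome ClientHello
    contains at least one GREASE extension id. Absence alone does not
    prove a bot, but it is a signal."""
    return any(is_grease(ext) for ext in extension_ids)

print([hex(v) for v in GREASE_VALUES[:3]])         # ['0xa0a', '0x1a1a', '0x2a2a']
print(looks_grease_aware([0x2A2A, 0, 23, 65281]))  # True
print(looks_grease_aware([0, 23, 65281]))          # False
```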

    🔥 5. Chrome header ordering (very important)

    Real Chrome emits request headers in a particular order and with expected fields (especially over HTTP/2). Example ordering (illustrative):

    :authority
    :method
    :path
    :scheme
    sec-ch-ua
    sec-ch-ua-mobile
    sec-ch-ua-platform
    upgrade-insecure-requests
    user-agent
    accept
    sec-fetch-site
    sec-fetch-mode
    sec-fetch-user
    sec-fetch-dest
    accept-encoding
    accept-language

    Proxies commonly:

    ❌ Change header order

    ❌ Add or remove encodings

    ❌ Inject proxy-specific headers

    ❌ Re-chunk or alter HTTP/2 frames

    Such differences can be used to detect non-standard clients. Even advanced antidetect browsers sometimes fail to perfectly match header framing and ordering.
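    The header-order check above can be sketched directly: take the illustrative Chrome ordering quoted earlier as a template and verify that whatever headers the client did send appear in template order. Real detectors match per-version templates; this only shows the shape of the idea:

```python
# Illustrative Chrome HTTP/2 header order, from the list above.
CHROME_ORDER = [
    ":authority", ":method", ":path", ":scheme",
    "sec-ch-ua", "sec-ch-ua-mobile", "sec-ch-ua-platform",
    "upgrade-insecure-requests", "user-agent", "accept",
    "sec-fetch-site", "sec-fetch-mode", "sec-fetch-user", "sec-fetch-dest",
    "accept-encoding", "accept-language",
]

def order_matches(observed, template=CHROME_ORDER):
    """True if the headers the client sent appear in template order
    (missing headers are tolerated, reordering is not)."""
    ranks = {name: i for i, name in enumerate(template)}
    seen = [ranks[h] for h in observed if h in ranks]
    return seen == sorted(seen)

print(order_matches([":method", ":authority", ":path"]))       # False (reordered)
print(order_matches([":authority", ":method", "user-agent"]))  # True
```

A proxy that rewrites or re-emits headers in its own order fails this check even when every individual header value is correct.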

    🔥 6. Chrome’s OS-level artifacts

    A real browser request carries many environment-dependent artifacts that a proxy cannot emulate:

    Platform-dependent behavior and heuristics

    Built-in compression/windowing patterns

    HTTP/2 frame pacing and stream behaviors

    Huffman encoding usage patterns in headers

    Per-build TLS randomization nuances

    A proxy cannot reproduce these low-level, OS/timing/implementation artifacts originating from the client runtime.

  • @ranjeet

    Take ChatGPT with a grain of salt and use your own head. The stuff above is only half right.

  • I used ChatGPT only for typing, because my English is not good.

  • @sergerdn
    I am not a Chromium developer, so maybe some details are wrong; I was using AI only for research, trying to understand how Chromium generates its TLS fingerprint.
    Over the last 4 months I developed my own proxy network with TLS spoofing, but my accounts are still getting banned, even though I use real humans for page visits and for doing the tasks.

  • @rsgmsk said in TLS Fingerprint: the main BAS problem:

    the TLS fingerprint differs from person to person

    @rsgmsk said in TLS Fingerprint: the main BAS problem:

    an identical tls fingerprint is a giveaway

    Don't mix apples and oranges:

    • the browser's HTTP-client characteristics (TLS fingerprint, extension order, headers)
    • the JS/device fingerprint (canvas, WebGL, fonts, etc.)

    Think of the TLS fingerprint as SHA256(browser/application version/User-Agent).
    An identical TLS fingerprint says that the client behaves like the original version of the browser/application and does not conflict with, for example, the User-Agent.
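    That hash analogy can be made literal: treat the fingerprint as a hash of the client implementation and check it against what the User-Agent claims. The hash values and client names below are made-up placeholders, not real Chrome or curl fingerprints:

```python
import hashlib

# Illustrative fingerprint database; values are placeholders.
KNOWN_FINGERPRINTS = {
    "chrome/120": hashlib.sha256(b"chrome/120-tls-stack").hexdigest(),
    "curl/8.5":   hashlib.sha256(b"curl/8.5-tls-stack").hexdigest(),
}

def consistent(claimed_client, observed_fp):
    """The fingerprint is not supposed to be unique per user; it is
    supposed to MATCH the client the User-Agent claims to be."""
    return KNOWN_FINGERPRINTS.get(claimed_client) == observed_fp

fp = KNOWN_FINGERPRINTS["chrome/120"]
print(consistent("chrome/120", fp))  # True: same fingerprint as real Chrome = good
print(consistent("curl/8.5", fp))    # False: UA/TLS mismatch = bot signal
```

This is why "everyone has the same fingerprint" is the desired state, not a giveaway: millions of genuine Chrome installs of the same version hash identically.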

  • @Int64 bad library.

  • @tersinemuhendis any proof?

  • I’ve started studying low-level networking to understand full network behavior in depth. The clear conclusion so far is that a TLS fingerprint cannot be formed solely by BAS (Chromium). TLS fingerprinting also depends on system-level OS behavior, where the browser + OS + network stack + hardware timing together create the final observable fingerprint.

    In this process, the protocol-level fingerprint (from Chrome/Chromium) stays consistent, but OS-level packet timing, TCP implementation, congestion control, jitter, and data buffering influence the actual TLS identity.

    I do see one possible direction — but this would require building a larger ecosystem where the proxy environment can either emulate or preserve OS-level network characteristics. That’s ongoing R&D, and we’ll see how it evolves.

    At least one confusion is now clearly resolved: as per my current knowledge, modifying only BAS (Chromium) to produce different realistic TLS fingerprints does not seem practically feasible unless the OS-level signals are also replicated or manipulated.

    (all of the above was translated from Hindi to English)

  • @tersinemuhendis My module is the same)

  • great information, thank you, guys!

  • @Int64 Having the same BoringSSL doesn't mean you're performing the same operations, my friend. Chrome also uses BoringSSL, but that does not mean your module behaves the same as Chrome.

  • @tersinemuhendis omg 😲 read the module description carefully or give me real facts

  • @Int64 let them reinvent the wheel, don't waste your time on them.

  • @usertrue said in TLS Fingerprint: the main BAS problem:

    @Int64 let them reinvent the wheel, don't waste your time on them.

    //grumpy-old-man mode on
    People trust their own opinion too much; if you take your hypothesis and test it from the outside, you will immediately see what is actually going on.
    //grumpy-old-man mode off