Merge changes from upstream

This commit is contained in:
Vecna 2023-11-14 14:07:29 -05:00
commit 0bf3b30669
22 changed files with 512 additions and 182 deletions

Cargo.lock generated

@@ -1862,18 +1862,18 @@ checksum = "b0293b4b29daaf487284529cc2f5675b8e57c61f70167ba415a463651fd6a918"
[[package]]
name = "serde"
version = "1.0.190"
version = "1.0.192"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91d3c334ca1ee894a2c6f6ad698fe8c435b76d504b13d436f0685d648d6d96f7"
checksum = "bca2a08484b285dcb282d0f67b26cadc0df8b19f8c12502c13d966bf9482f001"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.190"
version = "1.0.192"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67c5609f394e5c2bd7fc51efda478004ea80ef42fee983d5c67a65e34f32c0e3"
checksum = "d6c7207fbec9faa48073f3e3074cbe553af6ea512d7c21ba46e434e70ea9fbc1"
dependencies = [
"proc-macro2",
"quote",


@@ -4,9 +4,13 @@ version = "0.1.0"
authors = ["The Tor Project, Inc.", "Lindsey Tulloch <onyinyang@torproject.org>", "Cecylia Bocovich <cohosh@torproject.org>"]
edition = "2021"
rust-version = "1.65.0"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox-project/~/wikis/home"
description = "Tool for receiving resources from rdsys and distributing them to users"
keywords = ["tor", "lox", "bridges"]
license = "MIT"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/wikis/home"
description = "Tool for receiving Tor bridges from rdsys and distributing them to users"
keywords = ["tor", "lox", "bridges", "censorship-resistance"]
categories = ["web-programming::http-server"]
repository = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/tree/main/crates/lox-distributor"
readme = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/blob/main/crates/lox-distributor/README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html


@@ -2,40 +2,107 @@
The Lox distributor receives resources from [rdsys](https://gitlab.torproject.org/tpo/anti-censorship/rdsys) and writes them to [Lox
BridgeLines](https://git-crysp.uwaterloo.ca/iang/lox/src/master/src/bridge_table.rs#L42). Concurrently, it receives and responds to requests from [Lox clients](https://gitlab.torproject.org/tpo/anti-censorship/lox/lox-wasm). It saves the [LoxContext](https://gitlab.torproject.org/tpo/anti-censorship/lox-rs/-/blob/main/crates/lox-distributor/src/lox_context.rs) to a database every time the Lox bridgetable is updated and before the distributor is shut down.
## Configure rdsys stream
A test `config.json` is included for testing on a local instance of rdsys. This
can be edited to correspond to the desired types of resources, endpoints and database configuration.
## Test Run
## Configuration
For testing purposes, you will need a running instance of rdsys as well as a running Lox client.
A test `config.json` is included for testing on a local instance of rdsys. There are several configurable
fields in this config file:
### DB Config
The DB config `db` accepts a `db_path` where the Lox distributor will look for or create a new Lox database as follows:
```
"db": {
"db_path": "path/to/db"
}
```
### Rdsys Config
The rdsys request `rtype` has the following fields:
`endpoint` the endpoint of the rdsys instance that the distributor will make requests to,
`name` the type of distributor we are requesting. In most cases this should be `lox`,
`token` the corresponding API token,
`types` the types of bridges that are being accepted.
Example configuration:
```
"rtype": {
"endpoint": "http://127.0.0.1:7100/resources",
"name": "lox",
"token": "LoxApiTokenPlaceholder",
"types": [
"obfs4",
"scramblesuit"
]
}
```
### Bridge Config
The Bridge config, `bridge_config` has the following fields:
`watched_blockages` lists the regions (as ISO 3166 country codes) that Lox will monitor for listed blockages.
`percent_spares` is the percentage of buckets that should be allocated as hot spares (as opposed to open invitation buckets).
Example configuration:
```
"bridge_config": {
"watched_blockages": [
"RU"
],
"percent_spares": 50
},
```
### Metrics Port
The `metrics_port` field is the port that the Prometheus server will run on.
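As a sketch, this field sits at the top level of `config.json`; the port value here is simply the one used in the test config shown later in this commit:
```
"metrics_port": 5222,
```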
### Command Line Arguments for Advanced Database Config
A few configuration options for the Lox database can be passed as arguments at runtime, since they are not likely to be suitable as persistent configuration options.
Rolling back to a previous version of the database is possible by passing the
`roll_back_date` flag at runtime and providing the date/time as a `%Y-%m-%d_%H:%M:%S` string. This argument should be passed if the `LoxContext` should be rolled back to a previous state due to, for example, a mass blocking event that is likely not due to Lox user behaviour. If the exact roll back date/time is not known, the last db entry within 24 hours from the passed `roll_back_date` will be used or else the program will fail gracefully.
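A roll-back can be sketched as follows. The `%Y-%m-%d_%H:%M:%S` format comes from the paragraph above; the exact command-line syntax for passing the flag is an assumption, not verified against the distributor's argument parser:

```shell
# Build a roll_back_date string in the expected %Y-%m-%d_%H:%M:%S format
ROLL_BACK_DATE=$(date -u +"%Y-%m-%d_%H:%M:%S")
echo "$ROLL_BACK_DATE"

# Hypothetical invocation (flag name from this README; syntax is an assumption):
# cargo run -- config.json --roll_back_date "$ROLL_BACK_DATE"
```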
## Distributor Staging Environment
The Lox distributor is currently deployed for testing on `rdsys-frontend-01`.
Client requests can be made to this distributor by following the instructions in the [`lox-wasm` README](../lox-wasm/README.md/#testing)
## Running the Lox Distributor Locally
For testing purposes, you will need a locally running instance of [rdsys](https://gitlab.torproject.org/tpo/anti-censorship/rdsys) as well as a running [Lox client](../lox-wasm/).
### Run rdsys locally
First clone rdsys from [here](https://gitlab.torproject.org/tpo/anti-censorship/rdsys) then change into the backend directory:
```
cd rdsys/cmd/backend
```
Finally run rdsys:
```
./backend --config config.json
```
## Database Config
The database has a few configuration options. The path for where the database
should be read/written can be specified in the `config.json`. Rolling back to a
previous version of the database is also possible by passing the
`roll_back_date` flag at runtime and providing the date/time as a `%Y-%m-%d_%H:%M:%S` string. This argument should be passed if the `LoxContext` should be rolled back to a previous state due to, for example, a mass blocking event that is likely not due to Lox user behaviour. If the exact roll back date/time is not known, the last db entry within 24 hours from the passed `roll_back_date` will be used or else the program will fail gracefully.
First clone rdsys from [here](https://gitlab.torproject.org/tpo/anti-censorship/rdsys) then follow the instructions in the [README](https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/README.md) to create a locally running rdsys instance with fake bridge descriptors.
### Run Lox Distributor locally
Simply run `cargo run -- config.json` :)
The easiest way to test with rdsys is to adjust the [config.json](config.json) so that the `rtype` reads as follows:
```
"rtype": {
"endpoint": "http://127.0.0.1:7100/resources",
"name": "https",
"token": "HttpsApiTokenPlaceholder",
"types": [
"obfs4",
"snowflake"
]
}
```
Then simply run `cargo run -- config.json` :)
### Run a Lox client locally
First clone lox-wasm from [here](https://gitlab.torproject.org/tpo/anti-censorship/lox/lox-wasm). Follow the instructions in the [README](https://gitlab.torproject.org/tpo/anti-censorship/lox/lox-wasm/-/blob/main/README.md) to build and test the Lox client.
First clone lox-wasm from [here](https://gitlab.torproject.org/tpo/anti-censorship/lox/lox-wasm). Follow the instructions in the [README](../lox-wasm/README.md) to build and test the Lox client.


@@ -5,15 +5,18 @@
},
"metrics_port": 5222,
"bridge_config": {
"watched_blockages": [
"RU"
],
"percent_spares": 50
},
"rtype": {
"endpoint": "http://127.0.0.1:7100/resources",
"name": "https",
"token": "HttpsApiTokenPlaceholder",
"name": "lox",
"token": "LoxApiTokenPlaceholder",
"types": [
"obfs2",
"scramblesuit"
"obfs4",
"snowflake"
]
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -57,27 +57,27 @@ impl LoxServerContext {
}
}
pub fn handle_working_resources(&self, working_resources: Vec<Resource>) -> Vec<u64> {
pub fn handle_working_resources(
&self,
watched_blockages: Vec<String>,
working_resources: Vec<Resource>,
) -> Vec<u64> {
let mut accounted_for_bridges: Vec<u64> = Vec::new();
let bridgelines = parse_into_bridgelines(working_resources);
let (bridgelines, blocked_bridgelines) =
parse_into_bridgelines(watched_blockages, working_resources);
for bridge in blocked_bridgelines {
let res = self.mark_blocked(bridge);
if res {
println!("BridgeLine {:?} successfully marked unreachable", bridge);
self.metrics.blocked_bridges.inc();
} else {
println!(
"BridgeLine {:?} NOT marked unreachable, not found in bridgetable!",
bridge.uid_fingerprint
);
}
}
for bridge in bridgelines {
/* TODO: Functionality for marking bridges as unreachable/blocked should eventually happen here.
It is currently not enabled as there is not yet a reliable way to determine that a bridge is blocked.
This means that migrations to unblocked bridges do not currently work but can be easily enabled by parsing the
list of `blocked resources` from rdsys or another source with something like the following:
let res = context.add_unreachable(bridgeline);
if res {
println!(
"BridgeLine {:?} successfully marked unreachable: {:?}",
bridgeline
);
} else {
println!(
"BridgeLine {:?} NOT marked unreachable, saved for next update!",
bridge.uid_fingerprint
);
}
*/
let res = self.update_bridge(bridge);
if res {
println!(
@@ -96,12 +96,30 @@ impl LoxServerContext {
accounted_for_bridges
}
// When syncing resources with rdsys, handle the non-working resources
// Those that are blocked in the target region are marked as unreachable/blocked
// All others are matched by fingerprint and if they are still in the grace period, they are updated
// otherwise they are replaced with new bridges
pub fn handle_not_working_resources(
&self,
watched_blockages: Vec<String>,
not_working_resources: Vec<Resource>,
mut accounted_for_bridges: Vec<u64>,
) -> Vec<u64> {
let (grace_period, failing) = sort_for_parsing(not_working_resources);
let (grace_period, failing, blocked) =
sort_for_parsing(watched_blockages, not_working_resources);
for bridge in blocked {
let res = self.mark_blocked(bridge);
if res {
println!("BridgeLine {:?} successfully marked unreachable", bridge);
self.metrics.blocked_bridges.inc();
} else {
println!(
"BridgeLine {:?} NOT marked unreachable, not found in bridgetable!",
bridge.uid_fingerprint
);
}
}
// Update bridges in the bridge table that are failing but within the grace period
for bridge in grace_period {
let res = self.update_bridge(bridge);
@@ -151,18 +169,22 @@ impl LoxServerContext {
}
// Sync resources received from rdsys with the Lox bridgetable
pub fn sync_with_bridgetable(&self, resources: ResourceState) {
pub fn sync_with_bridgetable(&self, watched_blockages: Vec<String>, resources: ResourceState) {
// Check if each resource is already in the Lox bridgetable. If it is, it's probably fine
// to replace the existing resource with the incoming one to account for changes
// save a list of accounted for bridges and deal with the unaccounted for bridges at the end
let mut accounted_for_bridges: Vec<u64> = Vec::new();
// ensure all working resources are updated and accounted for
if let Some(working_resources) = resources.working {
accounted_for_bridges = self.handle_working_resources(working_resources);
accounted_for_bridges =
self.handle_working_resources(watched_blockages.clone(), working_resources);
}
if let Some(not_working_resources) = resources.not_working {
accounted_for_bridges =
self.handle_not_working_resources(not_working_resources, accounted_for_bridges);
accounted_for_bridges = self.handle_not_working_resources(
watched_blockages,
not_working_resources,
accounted_for_bridges,
);
}
let mut ba_clone = self.ba.lock().unwrap();
let total_reachable = ba_clone.bridge_table.reachable.len();
@@ -239,6 +261,9 @@ impl LoxServerContext {
to_be_replaced_bridges.push(bridge);
}
// Add extra_bridges to the Lox bridge table as open invitation bridges
// TODO: Add some consideration for whether or not bridges should be sorted as
// open invitation buckets or hot spare buckets
pub fn allocate_leftover_bridges(&self) {
let mut ba_obj = self.ba.lock().unwrap();
let mut db_obj = self.db.lock().unwrap();
@@ -246,6 +271,7 @@ impl LoxServerContext {
ba_obj.allocate_bridges(&mut extra_bridges, &mut db_obj);
}
// Add an open invitation bucket to the Lox db
pub fn add_openinv_bucket(&self, bucket: [BridgeLine; 3]) {
let mut ba_obj = self.ba.lock().unwrap();
let mut db_obj = self.db.lock().unwrap();
@@ -260,6 +286,7 @@ impl LoxServerContext {
}
}
// Add a hot spare bucket to the Lox db
pub fn add_spare_bucket(&self, bucket: [BridgeLine; 3]) {
let mut ba_obj = self.ba.lock().unwrap();
let mut db_obj = self.db.lock().unwrap();
@@ -289,14 +316,16 @@ impl LoxServerContext {
result
}
/* TODO: Uncomment when bridge blocking is finalized
pub fn add_unreachable(&self, bridgeline: BridgeLine) -> bool {
let mut ba_obj = self.ba.lock().unwrap();
let mut db_obj = self.db.lock().unwrap();
ba_obj.bridge_unreachable(&bridgeline, &mut db_obj)
}
*/
pub fn mark_blocked(&self, bridgeline: BridgeLine) -> bool {
let mut ba_obj = self.ba.lock().unwrap();
let mut db_obj = self.db.lock().unwrap();
ba_obj.bridge_blocked(&bridgeline, &mut db_obj)
}
// Find the bridgeline in the Lox bridge table that matches the fingerprint
// of the bridgeline passed by argument. Once found, replace it with the bridgeline
// passed by argument to ensure all fields besides the fingerprint are updated
// appropriately.
pub fn update_bridge(&self, bridgeline: BridgeLine) -> bool {
let mut ba_obj = self.ba.lock().unwrap();
ba_obj.bridge_update(&bridgeline)
@@ -311,11 +340,13 @@ impl LoxServerContext {
println!("Today's date according to server: {}", ba_obj.today());
}
// Encrypts the Lox bridge table, should be called after every sync
pub fn encrypt_table(&self) -> HashMap<u32, EncryptedBucket> {
let mut ba_obj = self.ba.lock().unwrap();
ba_obj.enc_bridge_table().clone()
}
// Returns a vector of the Lox Authority's public keys
fn pubkeys(&self) -> Vec<IssuerPubKey> {
let ba_obj = self.ba.lock().unwrap();
// vector of public keys (to serialize)
@@ -332,6 +363,8 @@ impl LoxServerContext {
self.ba.lock().unwrap().today()
}
// Generates a Lox invitation if fewer than MAX_BRIDGES_PER_DAY have been
// requested on a given day
fn gen_invite(&self) -> Result<lox_utils::Invite, ExceededMaxBridgesError> {
let mut obj = self.db.lock().unwrap();
match obj.invite() {
@@ -345,11 +378,13 @@ impl LoxServerContext {
}
}
// Returns a valid open_invite::Response if the open_invite::Request is valid
fn open_inv(&self, req: open_invite::Request) -> Result<open_invite::Response, ProofError> {
let mut ba_obj = self.ba.lock().unwrap();
ba_obj.handle_open_invite(req)
}
// Returns a valid trust_promotion::Response if the trust_promotion::Request is valid
fn trust_promo(
&self,
req: trust_promotion::Request,
@@ -358,16 +393,19 @@ impl LoxServerContext {
ba_obj.handle_trust_promotion(req)
}
// Returns a valid trust_migration::Response if the trust_migration::Request is valid
fn trust_migration(&self, req: migration::Request) -> Result<migration::Response, ProofError> {
let mut ba_obj = self.ba.lock().unwrap();
ba_obj.handle_migration(req)
}
// Returns a valid level_up::Response if the level_up::Request is valid
fn level_up(&self, req: level_up::Request) -> Result<level_up::Response, ProofError> {
let mut ba_obj = self.ba.lock().unwrap();
ba_obj.handle_level_up(req)
}
// Returns a valid issue_invite::Response if the issue_invite::Request is valid
fn issue_invite(
&self,
req: issue_invite::Request,
@@ -376,6 +414,7 @@ impl LoxServerContext {
ba_obj.handle_issue_invite(req)
}
// Returns a valid redeem_invite::Response if the redeem_invite::Request is valid
fn redeem_invite(
&self,
req: redeem_invite::Request,
@@ -384,6 +423,7 @@ impl LoxServerContext {
ba_obj.handle_redeem_invite(req)
}
// Returns a valid check_blockage::Response if the check_blockage::Request is valid
fn check_blockage(
&self,
req: check_blockage::Request,
@@ -392,6 +432,7 @@ impl LoxServerContext {
ba_obj.handle_check_blockage(req)
}
// Returns a valid blockage_migration::Response if the blockage_migration::Request is valid
fn blockage_migration(
&self,
req: blockage_migration::Request,
@@ -400,7 +441,7 @@ impl LoxServerContext {
ba_obj.handle_blockage_migration(req)
}
// Generate and return an open invitation token
// Generate and return an open invitation token as an HTTP response
pub fn generate_invite(self) -> Response<Body> {
self.metrics.invites_requested.inc();
let invite = self.gen_invite();
@@ -419,7 +460,7 @@ impl LoxServerContext {
}
}
// Return the serialized encrypted bridge table
// Return the serialized encrypted bridge table as an HTTP response
pub fn send_reachability_cred(self) -> Response<Body> {
let enc_table = self.encrypt_table();
let etable = lox_utils::EncBridgeTable { etable: enc_table };
@@ -432,7 +473,7 @@ impl LoxServerContext {
}
}
// Return the serialized pubkeys for the Bridge Authority
// Return the serialized pubkeys for the Bridge Authority as an HTTP response
pub fn send_keys(self) -> Response<Body> {
let pubkeys = self.pubkeys();
match serde_json::to_string(&pubkeys) {
@@ -455,6 +496,7 @@ impl LoxServerContext {
}
}
// Verify the open invitation request and return the result as an HTTP response
pub fn verify_and_send_open_cred(self, request: Bytes) -> Response<Body> {
let req = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -473,6 +515,7 @@ impl LoxServerContext {
}
}
// Verify the trust promotion request and return the result as an HTTP response
pub fn verify_and_send_trust_promo(self, request: Bytes) -> Response<Body> {
let req: trust_promotion::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -491,6 +534,7 @@ impl LoxServerContext {
}
}
// Verify the trust migration request and return the result as an HTTP response
pub fn verify_and_send_trust_migration(self, request: Bytes) -> Response<Body> {
let req: migration::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -509,6 +553,7 @@ impl LoxServerContext {
}
}
// Verify the level up request and return the result as an HTTP response
pub fn verify_and_send_level_up(self, request: Bytes) -> Response<Body> {
let req: level_up::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -527,6 +572,7 @@ impl LoxServerContext {
}
}
// Verify the issue invite request and return the result as an HTTP response
pub fn verify_and_send_issue_invite(self, request: Bytes) -> Response<Body> {
let req: issue_invite::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -545,6 +591,7 @@ impl LoxServerContext {
}
}
// Verify the redeem invite request and return the result as an HTTP response
pub fn verify_and_send_redeem_invite(self, request: Bytes) -> Response<Body> {
let req: redeem_invite::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -563,6 +610,7 @@ impl LoxServerContext {
}
}
// Verify the check blockage request and return the result as an HTTP response
pub fn verify_and_send_check_blockage(self, request: Bytes) -> Response<Body> {
let req: check_blockage::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -581,6 +629,7 @@ impl LoxServerContext {
}
}
// Verify the blockage migration request and return the result as an HTTP response
pub fn verify_and_send_blockage_migration(self, request: Bytes) -> Response<Body> {
let req: blockage_migration::Request = match serde_json::from_slice(&request) {
Ok(req) => req,
@@ -612,6 +661,7 @@ impl LoxServerContext {
}
}
// Prepare HTTP Response for successful Server Request
fn prepare_header(response: String) -> Response<Body> {
let mut resp = Response::new(Body::from(response));
resp.headers_mut()
@@ -619,6 +669,7 @@ fn prepare_header(response: String) -> Response<Body> {
resp
}
// Prepare HTTP Response for errored Server Request
fn prepare_error_header(error: String) -> Response<Body> {
Response::builder()
.status(hyper::StatusCode::BAD_REQUEST)


@@ -85,6 +85,10 @@ impl Default for DbConfig {
// Config information for how bridges should be allocated to buckets
#[derive(Debug, Default, Deserialize)]
pub struct BridgeConfig {
// A list of regions (as ISO 3166 country codes) that Lox will monitor resources for.
// Any region indicated here that is listed in the `blocked_in` field of a resource will be marked as
// blocked by Lox's bridge authority.
watched_blockages: Vec<String>,
// The percentage of buckets (made up of MAX_BRIDGES_PER_BUCKET bridges)
// that should be allocated as spare buckets
// This will be calculated as the floor of buckets.len() * percent_spares / 100
@@ -100,9 +104,9 @@ struct ResourceInfo {
}
// Populate Bridgedb from rdsys
// Rdsys sender creates a ResourceStream with the api_endpoint, resource token and type specified
// Rdsys sender creates a Resource request with the api_endpoint, resource token and type specified
// in the config.json file.
async fn rdsys_stream(
async fn rdsys_request_creator(
rtype: ResourceInfo,
tx: mpsc::Sender<ResourceState>,
mut kill: broadcast::Receiver<()>,
@@ -114,6 +118,8 @@ async fn rdsys_stream(
}
}
// Makes a request to rdsys for the full set of Resources assigned to lox every interval
// (defined in the function)
async fn rdsys_request(rtype: ResourceInfo, tx: mpsc::Sender<ResourceState>) {
let mut interval = interval(Duration::from_secs(5));
loop {
@@ -130,6 +136,7 @@ async fn rdsys_request(rtype: ResourceInfo, tx: mpsc::Sender<ResourceState>) {
}
}
// Parse bridges received from rdsys and sync with Lox context
async fn rdsys_bridge_parser(
rdsys_tx: mpsc::Sender<Command>,
rx: mpsc::Receiver<ResourceState>,
@@ -152,6 +159,7 @@ async fn parse_bridges(rdsys_tx: mpsc::Sender<Command>, mut rx: mpsc::Receiver<R
}
}
// Create a prometheus metrics server
async fn start_metrics_collector(
metrics_addr: SocketAddr,
registry: Registry,
@@ -203,7 +211,10 @@ async fn context_manager(
// bridgetable with all of the working bridges received from rdsys.
if context.bridgetable_is_empty() {
if let Some(working_resources) = resources.working {
let bridgelines = parse_into_bridgelines(working_resources);
let (bridgelines, _) = parse_into_bridgelines(
bridge_config.watched_blockages.clone(),
working_resources,
);
context.metrics.new_bridges.inc_by(bridgelines.len() as u64);
let (buckets, leftovers) = parse_into_buckets(bridgelines);
for leftover in leftovers {
@@ -218,7 +229,8 @@
// If bridges are labelled as blocked_in, we should also handle blocking behaviour.
}
} else {
context.sync_with_bridgetable(resources);
context
.sync_with_bridgetable(bridge_config.watched_blockages.clone(), resources);
}
// Handle any bridges that are leftover in the bridge authority from the sync
context.allocate_leftover_bridges();
@@ -314,7 +326,7 @@ async fn main() {
});
let (tx, rx) = mpsc::channel(32);
let rdsys_request_handler = spawn(async { rdsys_stream(config.rtype, tx, kill_stream).await });
let rdsys_request_handler = spawn(async { rdsys_request_creator(config.rtype, tx, kill_stream).await });
let rdsys_resource_receiver =
spawn(async { rdsys_bridge_parser(rdsys_tx, rx, kill_parser).await });


@@ -161,7 +161,7 @@ pub async fn start_metrics_server(metrics_addr: SocketAddr, registry: Registry)
.unwrap();
}
/// This function returns a HTTP handler (i.e. another function)
/// This function returns an HTTP handler (i.e. another function)
pub fn make_handler(
registry: Arc<Registry>,
) -> impl Fn(Request<Body>) -> Pin<Box<dyn Future<Output = io::Result<Response<Body>>> + Send>> {


@@ -259,8 +259,8 @@ mod tests {
.unwrap();
assert!(bucket.1.is_some());
// Block two of our bridges
lox_auth.bridge_unreachable(&bucket.0[0], &mut bdb);
lox_auth.bridge_unreachable(&bucket.0[2], &mut bdb);
lox_auth.bridge_blocked(&bucket.0[0], &mut bdb);
lox_auth.bridge_blocked(&bucket.0[2], &mut bdb);
(cred, id, key)
}


@@ -1,12 +1,20 @@
use std::process::exit;
use chrono::{Duration, Utc};
use lox_library::bridge_table::{BridgeLine, BRIDGE_BYTES, MAX_BRIDGES_PER_BUCKET};
use rdsys_backend::proto::Resource;
pub const ACCEPTED_HOURS_OF_FAILURE: i64 = 3;
// Parse each resource from rdsys into a Bridgeline as expected by the Lox Bridgetable
pub fn parse_into_bridgelines(resources: Vec<Resource>) -> Vec<BridgeLine> {
// Parse each resource from rdsys into a Bridgeline as expected by the Lox Bridgetable and return
// Bridgelines as two vectors, those that are marked as blocked in a specified region (indicated in the config file)
// and those that are not blocked.
pub fn parse_into_bridgelines(
watched_blockages: Vec<String>,
resources: Vec<Resource>,
) -> (Vec<BridgeLine>, Vec<BridgeLine>) {
let mut bridgelines: Vec<BridgeLine> = Vec::new();
let mut blocked_bridgelines: Vec<BridgeLine> = Vec::new();
for resource in resources {
let mut ip_bytes: [u8; 16] = [0; 16];
ip_bytes[..resource.address.len()].copy_from_slice(resource.address.as_bytes());
@@ -14,27 +22,38 @@ pub fn parse_into_bridgelines(resources: Vec<Resource>) -> Vec<BridgeLine> {
.get_uid()
.expect("Unable to get Fingerprint UID of resource");
let infostr: String = format!(
"type={} blocked_in={:?} protocol={} fingerprint={:?} or_addresses={:?} distribution={} flags={:?} params={:?}",
resource.r#type,
resource.blocked_in,
resource.protocol,
resource.fingerprint,
resource.or_addresses,
resource.distribution,
resource.flags,
resource.params,
);
"type={} fingerprint={:?} params={:?}",
resource.r#type, resource.fingerprint, resource.params,
);
let mut info_bytes: [u8; BRIDGE_BYTES - 26] = [0; BRIDGE_BYTES - 26];
info_bytes[..infostr.len()].copy_from_slice(infostr.as_bytes());
bridgelines.push(BridgeLine {
addr: ip_bytes,
port: resource.port,
uid_fingerprint: resource_uid,
info: info_bytes,
})
let mut blocked = false;
for watched_blockage in watched_blockages.clone() {
if let Some(blockage) = resource.blocked_in.get(&watched_blockage) {
if *blockage {
blocked = true;
break;
}
}
}
if blocked {
blocked_bridgelines.push(BridgeLine {
addr: ip_bytes,
port: resource.port,
uid_fingerprint: resource_uid,
info: info_bytes,
});
} else {
bridgelines.push(BridgeLine {
addr: ip_bytes,
port: resource.port,
uid_fingerprint: resource_uid,
info: info_bytes,
});
}
}
bridgelines
(bridgelines, blocked_bridgelines)
}
// Allocate each Bridgeline into a bucket that will later be allocated into spare buckets or open invitation buckets
@@ -73,12 +92,16 @@ pub fn parse_into_buckets(
(buckets, leftovers)
}
// Sort Resources into those that are functional and those that are failing based on the last time
// they were passing tests. Before passing them back to the calling function, they are parsed into
// BridgeLines
pub fn sort_for_parsing(resources: Vec<Resource>) -> (Vec<BridgeLine>, Vec<BridgeLine>) {
// Sort Resources into those that are functional, those that are failing based on the last time
// they were passing tests, and those that are blocked in the region(s) specified in the config file.
// Before passing them back to the calling function, they are parsed into BridgeLines
pub fn sort_for_parsing(
watched_blockages: Vec<String>,
resources: Vec<Resource>,
) -> (Vec<BridgeLine>, Vec<BridgeLine>, Vec<BridgeLine>) {
let mut grace_period: Vec<Resource> = Vec::new();
let mut failing: Vec<Resource> = Vec::new();
let mut blocked: Vec<BridgeLine> = Vec::new();
for resource in resources {
// TODO: Maybe filter for untested resources first if last_passed alone would skew
// the filter in an unintended direction
@@ -90,10 +113,14 @@ pub fn sort_for_parsing(resources: Vec<Resource>) -> (Vec<BridgeLine>, Vec<Bridg
failing.push(resource);
}
}
let grace_period_bridgelines = parse_into_bridgelines(grace_period);
let failing_bridgelines = parse_into_bridgelines(failing);
let (grace_period_bridgelines, mut grace_period_blocked) =
parse_into_bridgelines(watched_blockages.clone(), grace_period);
let (failing_bridgelines, mut failing_blocked) =
parse_into_bridgelines(watched_blockages, failing);
blocked.append(&mut grace_period_blocked);
blocked.append(&mut failing_blocked);
(grace_period_bridgelines, failing_bridgelines)
(grace_period_bridgelines, failing_bridgelines, blocked)
}
#[cfg(test)]
@@ -107,6 +134,7 @@ mod tests {
pub fn make_resource(
rtype: String,
blocked_in: HashMap<String, bool>,
address: String,
port: u16,
fingerprint: String,
@@ -122,7 +150,7 @@
);
Resource {
r#type: String::from(rtype),
blocked_in: HashMap::new(),
blocked_in,
test_result: TestResults {
last_passed: Utc::now() - Duration::hours(last_passed),
},
@@ -141,6 +169,13 @@
fn test_sort_for_parsing() {
let resource_one = make_resource(
"scramblesuit".to_owned(),
HashMap::from([
("AS".to_owned(), false),
("IR".to_owned(), false),
("PS".to_owned(), false),
("CN".to_owned(), false),
("RU".to_owned(), false),
]),
"123.456.789.100".to_owned(),
3002,
"BE84A97D02130470A1C77839954392BA979F7EE1".to_owned(),
@@ -148,6 +183,13 @@
);
let resource_two = make_resource(
"https".to_owned(),
HashMap::from([
("AI".to_owned(), false),
("AG".to_owned(), false),
("BD".to_owned(), false),
("BB".to_owned(), false),
("RU".to_owned(), false),
]),
"123.222.333.444".to_owned(),
6002,
"C56B9EF202130470A1C77839954392BA979F7FF9".to_owned(),
@@ -155,13 +197,27 @@
);
let resource_three = make_resource(
"scramblesuit".to_owned(),
"444.888.222.100".to_owned(),
HashMap::from([
("SZ".to_owned(), true),
("DO".to_owned(), false),
("GN".to_owned(), false),
("KR".to_owned(), false),
("RU".to_owned(), false),
]),
"443.288.222.100".to_owned(),
3042,
"1A4C8BD902130470A1C77839954392BA979F7B46".to_owned(),
"5E3A8BD902130470A1C77839954392BA979F7B46".to_owned(),
4,
);
let resource_four = make_resource(
"https".to_owned(),
HashMap::from([
("SH".to_owned(), true),
("ZA".to_owned(), true),
("UM".to_owned(), true),
("ZW".to_owned(), true),
("SK".to_owned(), true),
]),
"555.444.212.100".to_owned(),
8022,
"FF024DC302130470A1C77839954392BA979F7AE2".to_owned(),
@@ -169,22 +225,63 @@
);
let resource_five = make_resource(
"https".to_owned(),
HashMap::from([
("CA".to_owned(), false),
("UK".to_owned(), true),
("SR".to_owned(), false),
("RW".to_owned(), true),
("RU".to_owned(), false),
]),
"234.111.212.100".to_owned(),
10432,
"7B4DE14CB2130470A1C77839954392BA979F7AE2".to_owned(),
1,
);
let resource_six = make_resource(
"https".to_owned(),
HashMap::from([
("CA".to_owned(), false),
("UK".to_owned(), false),
("SR".to_owned(), false),
("RW".to_owned(), false),
("RU".to_owned(), true),
]),
"434.777.212.100".to_owned(),
10112,
"7B4DE04A22130470A1C77839954392BA979F7AE2".to_owned(),
1,
);
let resource_seven = make_resource(
"https".to_owned(),
HashMap::from([
("CA".to_owned(), true),
("UK".to_owned(), false),
("SR".to_owned(), false),
("RW".to_owned(), false),
("RU".to_owned(), true),
]),
"434.777.212.211".to_owned(),
8112,
"01E6FA4A22130470A1C77839954392BA979F7AE2".to_owned(),
5,
);
let mut test_vec: Vec<Resource> = Vec::new();
test_vec.push(resource_one);
test_vec.push(resource_two);
test_vec.push(resource_three);
test_vec.push(resource_four);
test_vec.push(resource_five);
let (functional, failing) = sort_for_parsing(test_vec);
test_vec.push(resource_six);
test_vec.push(resource_seven);
println!("How many in test? {:?}", test_vec.len());
let mut watched_blockages: Vec<String> = Vec::new();
watched_blockages.push("RU".to_string());
let (functional, failing, blocked) = sort_for_parsing(watched_blockages, test_vec);
assert!(
functional.len() == 2,
"There should be 2 functional bridges"
);
assert!(failing.len() == 3, "There should be 3 failing bridges");
assert!(blocked.len() == 2, "There should be 2 blocked bridges");
}
}


@ -1,12 +1,17 @@
[package]
name = "lox-library"
version = "0.1.0"
authors = ["Ian Goldberg <iang@uwaterloo.ca>"]
authors = ["Ian Goldberg <iang@uwaterloo.ca>", "Lindsey Tulloch <onyinyang@torproject.org>"]
edition = "2018"
rust-version = "1.65.0"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox-project/~/wikis/home"
license = "MIT"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/wikis/home"
description = "Main Lox library with protocols and functions that make up Lox"
keywords = ["tor", "lox", "bridges"]
keywords = ["tor", "lox", "bridge-distribution","censorship-resistance"]
categories = ["cryptography"]
repository = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/tree/main/crates/lox-library"
readme = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/blob/main/crates/lox-library/README.md"
[dependencies]
curve25519-dalek = { version = "4", default-features = false, features = ["serde", "rand_core", "digest"] }
@ -14,7 +19,7 @@ ed25519-dalek = { version = "2", features = ["serde", "rand_core"] }
bincode = "1"
chrono = "0.4"
rand = { version = "0.8", features = ["std_rng"]}
serde = "1.0.190"
serde = "1.0.192"
serde_with = {version = "3.4.0", features = ["json"]}
sha2 = "0.10"
statistical = "1.0.0"

View File

@ -1,8 +1,40 @@
# Lox
Lox is a reputation-based bridge distribution system that provides privacy protection to users and their social graph and is open to all users.
Lox is written in rust and requires `cargo` to test. [Install Rust](https://www.rust-lang.org/tools/install). We used Rust version 1.56.0.
Note that this implementation is coded such that the reachability certificate expires at 00:00 UTC. In reality, if the bucket is still reachable, a user could simply request a new reachability token if their request fails for this reason (a new certificate should be available prior to the outdated certificate expiring).
The protocols in the Lox-library are consistent with the Lox system described
in [Tulloch and Goldberg](https://petsymposium.org/popets/2023/popets-2023-0029.php) (and in greater detail [here](https://uwspace.uwaterloo.ca/handle/10012/18333)). However, this implementation may diverge from the theory over time as the system is deployed and its limitations are better illuminated. The [original version of this library](https://git-crysp.uwaterloo.ca/iang/lox) will remain a more precise implementation of the theory.
Lox is written in Rust and requires `cargo` to test. [Install Rust](https://www.rust-lang.org/tools/install). We used Rust version 1.65.0.
## Notable Changes from the original repository
Some changes have been made to integrate the existing Lox protocols with Tor's
bridge distributor [rdsys](https://gitlab.torproject.org/tpo/anti-censorship/rdsys),
but so far, these have not affected the Lox protocols themselves.
These changes are necessary to keep the consistency of bridges in buckets that Lox requires while working with the reality of how rdsys/Tor currently receives and distributes information about bridges. The changes to Lox are:
1. Add a `uid_fingerprint` field to the BridgeLine which helps with bridge lookup and
corresponds (roughly) to the unique fingerprint rdsys gives to each bridge
(made up of a hash of the IP and pluggable transport type)
2. Allow for the details of a bridge to be updated. This has been added to
[`crates/lox-library/src/lib.rs`](https://gitlab.torproject.org/tpo/anti-censorship/lox-rs/-/blob/main/crates/lox-library/src/lib.rs) and accounts for the fact that some details
of an existing bridge (i.e., that has a matching fingerprint) may be updated
from time to time.
3. Allow for a bridge to be replaced without penalty. This has also been added to
[`crates/lox-library/src/lib.rs`](https://gitlab.torproject.org/tpo/anti-censorship/lox-rs/-/blob/main/crates/lox-library/src/lib.rs)
and accounts for the fact that Tor currently does not have a robust way of
[knowing that a bridge is blocked](https://gitlab.torproject.org/tpo/anti-censorship/censorship-analysis/-/issues/40035), but does have some tests (namely,
[bridgestrap](https://gitlab.torproject.org/tpo/anti-censorship/bridgestrap) and [onbasca](https://gitlab.torproject.org/tpo/network-health/onbasca)) that help to determine if a
bridge should not be distributed. Since we do not know if the results of
these tests indicate a blocking event, we are allowing for bridges that
rdsys marks as unsuitable for distribution to be updated without penalty in the Lox library.
4. The vectors within `bridge_table.rs` have been refactored into HashMaps that use a unique `u32` for lookup. This has led to a
number of changes around how bridges are inserted/removed from the bridge table but does not impact the overall functionality of the Lox system.
5. The `DupFilter` has been changed from a `HashMap` to a `HashSet`, primarily because this is easier to Serialize/Deserialize when storing the state of the Lox system to recover from failure or to be able to roll back to a previous state.
6. The [`dalek-cryptography`](https://dalek.rs/) libraries have been updated to their most recent versions and the `zkp` library has been forked (until/unless this is fixed in one of the existing upstream repos) to fix a bug that appears when a public attribute is set to 0 (previously impacting only the blockage migration protocol when a user's invitations are set to 0 after migrating). The fork of `zkp` also includes similar updates to `dalek-cryptography` dependencies and some others such as `rand`.
7. Many tests that were used to create the Lox paper/thesis and measure the performance of the system were removed from this repository as they are unnecessary in a deployment scenario. They are still available in the [original repository](https://git-crysp.uwaterloo.ca/iang/lox).
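The `DupFilter` change in item 5 can be sketched in a few lines. This is illustrative only (the library's `DupFilter` is generic over the id type and wraps the set in a struct); the helper name here is hypothetical:

```rust
use std::collections::HashSet;

// Minimal sketch of a HashSet-based duplicate filter: accept an id the
// first time it is presented, reject replays.
fn check_and_insert(seen: &mut HashSet<u64>, id: u64) -> bool {
    // HashSet::insert returns false when the value was already present
    seen.insert(id)
}

fn main() {
    let mut seen = HashSet::new();
    assert!(check_and_insert(&mut seen, 42)); // fresh id: accepted
    assert!(!check_and_insert(&mut seen, 42)); // replay: rejected
}
```

A `HashSet` also serializes as a flat list of ids, which is what makes snapshotting and rollback simpler than with a `HashMap`.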
### Other important Notes
As with the original implementation, this implementation is coded such that the reachability certificate expires at 00:00 UTC. Therefore, if an unlucky user requests a reachability certificate just before 00:00 UTC and tries to use it just after, the request will fail. If the bucket is still reachable, the user can simply request a new reachability token (a new certificate should be available prior to the outdated certificate expiring).
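A client could sidestep this edge case by checking how close the current certificate is to the 00:00 UTC rollover and refreshing early. A minimal sketch using only the standard library (the helper name is hypothetical):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Seconds remaining until the next 00:00 UTC, when the reachability
// certificate rolls over. Unix time is UTC-based, so midnight falls on
// exact multiples of 86 400 seconds.
fn seconds_until_utc_midnight(now_unix: u64) -> u64 {
    const DAY: u64 = 86_400;
    DAY - (now_unix % DAY)
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    println!("next rollover in {} s", seconds_until_utc_midnight(now));
}
```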

View File

@ -25,7 +25,7 @@ use std::convert::{TryFrom, TryInto};
use subtle::ConstantTimeEq;
/// Each bridge information line is serialized into this many bytes
pub const BRIDGE_BYTES: usize = 300;
pub const BRIDGE_BYTES: usize = 200;
/// The max number of bridges per bucket
pub const MAX_BRIDGES_PER_BUCKET: usize = 3;
@ -240,41 +240,46 @@ struct K {
/// A BridgeTable is the internal structure holding the buckets
/// containing the bridges, the keys used to encrypt the buckets, and
/// the encrypted buckets. The encrypted buckets will be exposed to the
/// the encrypted buckets. The encrypted buckets will be exposed to the
/// users of the system, and each user credential will contain the
/// decryption key for one bucket.
#[serde_as]
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct BridgeTable {
// All structures in the bridgetable are indexed by counter
/// All structures in the bridgetable are indexed by counter
pub counter: u32,
/// The keys of all buckets, indexed by counter, that are still part of the bridge table.
pub keys: HashMap<u32, [u8; 16]>,
/// All buckets, indexed by counter corresponding to the key above, that are
/// part of the bridge table.
pub buckets: HashMap<u32, [BridgeLine; MAX_BRIDGES_PER_BUCKET]>,
pub encbuckets: HashMap<u32, EncryptedBucket>,
/// Individual bridges that are reachable
/// Individual bridges that are reachable.
#[serde_as(as = "HashMap<serde_with::json::JsonString, _>")]
pub reachable: HashMap<BridgeLine, Vec<(u32, usize)>>,
/// bucket ids of "hot spare" buckets. These buckets are not handed
/// Bucket ids of "hot spare" buckets. These buckets are not handed
/// to users, nor do they have any Migration credentials pointing to
/// them. When a new Migration credential is needed, a bucket is
/// them. When a new Migration credential is needed, a bucket is
/// removed from this set and used for that purpose.
pub spares: HashSet<u32>,
/// In some instances a single bridge may need to be added to a bucket
/// In that case, a spare bucket will be removed from the set of spare bridges. One
/// In some instances a single bridge may need to be added to a bucket as a replacement
/// or otherwise. In that case, a spare bucket will be removed from the set of spares, one
/// bridge will be used as the replacement and the left over bridges will be appended to
/// unallocated_bridges.
pub unallocated_bridges: Vec<BridgeLine>,
// To prevent issues with a counter for the hashmap keys, we keep a list of keys that
// no longer match any buckets that can be used before increasing the counter
// To prevent issues with the counter for the hashmap keys, keep a list of keys that
// no longer match any buckets that can be used before increasing the counter.
pub recycleable_keys: Vec<u32>,
// We maintain a list of keys that have been blocked (bucket_id: u32), as well as the
// A list of keys that have been blocked (bucket_id: u32), as well as the
// time (julian_date: u32) of their blocking so that they can be repurposed with new
// buckets after the EXPIRY_DATE
// buckets after the EXPIRY_DATE.
pub blocked_keys: Vec<(u32, u32)>,
// Similarly, we maintain a list of open entry buckets (bucket_id: u32) and the time they were
// created (julian_date: u32) so they will be listed as expired after the EXPIRY_DATE
// Similarly, a list of open entry buckets (bucket_id: u32) and the time they were
// created (julian_date: u32) so they will be listed as expired after the EXPIRY_DATE.
// TODO: add open entry buckets to the open_inv_keys only once they have been distributed
pub open_inv_keys: Vec<(u32, u32)>,
/// The date the buckets were last encrypted to make the encbucket.
/// The encbucket must be rebuilt each day so that the Bucket
/// The encbucket must be rebuilt at least each day so that the Bucket
/// Reachability credentials in the buckets can be refreshed.
pub date_last_enc: u32,
}
@ -288,7 +293,7 @@ impl BridgeTable {
self.buckets.len()
}
/// Append a new bucket to the bridge table, returning its index
/// Insert a new bucket into the bridge table, returning its index
pub fn new_bucket(&mut self, index: u32, bucket: &[BridgeLine; MAX_BRIDGES_PER_BUCKET]) {
// Pick a random key to encrypt this bucket
let mut rng = rand::thread_rng();
@ -311,7 +316,7 @@ impl BridgeTable {
}
/// Create the vector of encrypted buckets from the keys and buckets
/// in the BridgeTable. All of the entries will be (randomly)
/// in the BridgeTable. All of the entries will be (randomly)
/// re-encrypted, so it will be hidden whether any individual bucket
/// has changed (except for entirely new buckets, of course).
/// Bucket Reachability credentials are added to the buckets when

View File

@ -1,8 +1,8 @@
/*! The various credentials used by the system.
In each case, (P,Q) forms the MAC on the credential. This MAC is
In each case, (P,Q) forms the MAC on the credential. This MAC is
verifiable only by the issuing party, or if the issuing party issues a
zero-knowledge proof of its correctness (as it does at issuing time). */
zero-knowledge proof of its correctness (as it does at issuing time).*/
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
@ -11,7 +11,7 @@ use serde::{Deserialize, Serialize};
/// A migration credential.
///
/// This credential authorizes the holder of the Lox credential with the
/// given id to switch from bucket from_bucket to bucket to_bucket. The
/// given id to switch from bucket from_bucket to bucket to_bucket. The
/// migration_type attribute is 0 for trust upgrade migrations (moving
/// from a 1-bridge untrusted bucket to a 3-bridge trusted bucket) and 1
/// for blockage migrations (moving buckets because the from_bucket has
@ -29,7 +29,7 @@ pub struct Migration {
/// The main user credential in the Lox system.
///
/// Its id is jointly generated by the user and the BA (bridge
/// authority), but known only to the user. The level_since date is the
/// authority), but known only to the user. The level_since date is the
/// Julian date of when this user was changed to the current trust
/// level.
#[derive(Debug, Serialize, Deserialize)]
@ -46,13 +46,13 @@ pub struct Lox {
/// The migration key credential.
///
/// This credential is never actually instantiated. It is an implicit
/// credential on attributes lox_id and from_bucket. This credential
/// type does have an associated private and public key, however. The
/// This credential is never actually instantiated. It is an implicit
/// credential on attributes lox_id and from_bucket. This credential
/// type does have an associated private and public key, however. The
/// idea is that if a user proves (in zero knowledge) that their Lox
/// credential entitles them to migrate from one bucket to another, the
/// BA will issue a (blinded, so the BA will not know the values of the
/// attributes or of Q) MAC on this implicit credential. The Q value
/// attributes or of Q) MAC on this implicit credential. The Q value
/// will then be used (actually, a hash of lox_id, from_bucket, and Q)
/// to encrypt the to_bucket, P, and Q fields of a Migration credential.
/// That way, people entitled to migrate buckets can receive a Migration
@ -70,7 +70,7 @@ pub struct MigrationKey {
///
/// Each day, a credential of this type is put in each bucket that has
/// at least a (configurable) threshold number of bridges that have not
/// been blocked as of the given date. Users can present this
/// been blocked as of the given date. Users can present this
/// credential (in zero knowledge) with today's date to prove that the
/// bridges in their bucket have not been blocked, in order to gain a
/// trust level.
@ -86,7 +86,7 @@ pub struct BucketReachability {
///
/// These credentials allow a Lox user (the inviter) of sufficient trust
/// (level 2 or higher) to invite someone else (the invitee) to join the
/// system. The invitee ends up at trust level 1, in the _same bucket_
/// system. The invitee ends up at trust level 1, in the _same bucket_
/// as the inviter, and inherits the inviter's blockages count (so that
/// you can't clear your blockages count simply by inviting yourself).
/// Invitations expire after some amount of time.

View File

@ -10,7 +10,7 @@ use std::hash::Hash;
use serde::{Deserialize, Serialize};
/// Each instance of DupFilter maintains its own independent table of
/// seen ids. IdType will typically be Scalar.
/// seen ids. IdType will typically be Scalar.
#[derive(Default, Debug, Serialize, Deserialize)]
pub struct DupFilter<IdType: Hash + Eq + Copy + Serialize> {
seen_table: HashSet<IdType>,

View File

@ -8,7 +8,7 @@ Keyed-Verification Anonymous Credentials" (Chase, Meiklejohn, and
Zaverucha, CCS 2014)
The notation follows that of the paper "Hyphae: Social Secret Sharing"
(Lovecruft and de Valence, 2017), Section 4. */
(Lovecruft and de Valence, 2017), Section 4. */
// We really want points to be capital letters and scalars to be
// lowercase letters
@ -62,10 +62,12 @@ lazy_static! {
}
// EXPIRY_DATE is set to EXPIRY_DATE days for open-entry and blocked buckets in order to match
// the expiry date for Lox credentials. This particular value (EXPIRY_DATE) is chosen because
// the expiry date for Lox credentials. This particular value (EXPIRY_DATE) is chosen because
// values that are 2^k - 1 make range proofs more efficient, but this can be changed to any value
pub const EXPIRY_DATE: u32 = 511;
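Why 511 in particular: values of the form 2^k - 1 fit exactly in k bits, so a bit-decomposition range proof needs only k single-bit proofs (this reasoning about the proof layout is an assumption; the `zkp` backend may organize range proofs differently):

```rust
const EXPIRY_DATE: u32 = 511;

fn main() {
    // 511 = 2^9 - 1, so EXPIRY_DATE + 1 is a power of two
    assert!((EXPIRY_DATE + 1).is_power_of_two());
    // and the value fits in exactly 9 bits
    assert_eq!(32 - EXPIRY_DATE.leading_zeros(), 9);
}
```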
/// ReplaceSuccess sends a signal to the lox-distributor to inform
/// whether or not a bridge was successfully replaced
#[derive(PartialEq, Eq)]
pub enum ReplaceSuccess {
NotFound = 0,
@ -73,18 +75,23 @@ pub enum ReplaceSuccess {
Replaced = 2,
}
/// This error is thrown if the number of buckets/keys in the bridge table
/// exceeds u32 MAX. It is unlikely this error will ever occur.
#[derive(Error, Debug)]
pub enum NoAvailableIDError {
#[error("Find key exhausted with no available index found!")]
ExhaustedIndexer,
}
/// This error is thrown after the MAX_DAILY_BRIDGES threshold for bridges
/// distributed in a day has been reached
#[derive(Error, Debug)]
pub enum ExceededMaxBridgesError {
#[error("The maximum number of bridges has already been distributed today, please try again tomorrow!")]
ExceededMaxBridges,
}
/// Private Key of the Issuer
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct IssuerPrivKey {
x0tilde: Scalar,
@ -106,11 +113,13 @@ impl IssuerPrivKey {
}
}
/// Public Key of the Issuer
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct IssuerPubKey {
X: Vec<RistrettoPoint>,
}
impl IssuerPubKey {
/// Create an IssuerPubKey from the corresponding IssuerPrivKey
pub fn new(privkey: &IssuerPrivKey) -> IssuerPubKey {
@ -130,11 +139,11 @@ impl IssuerPubKey {
}
}
// Number of times a given invitation is ditributed
/// Number of times a given invitation is distributed
pub const OPENINV_K: u32 = 10;
// TODO: Decide on maximum daily number of invitations to be distributed
/// TODO: Decide on maximum daily number of invitations to be distributed
pub const MAX_DAILY_BRIDGES: u32 = 100;
/// The BridgeDb. This will typically be a singleton object. The
/// The BridgeDb. This will typically be a singleton object. The
/// BridgeDb's role is simply to issue signed "open invitations" to
/// people who are not yet part of the system.
#[derive(Debug, Serialize, Deserialize)]
@ -188,6 +197,8 @@ impl BridgeDb {
self.openinv_buckets.remove(bucket);
}
/// Remove open invitation and/or otherwise distributed buckets that have
/// become blocked or are expired to free up the index for a new bucket
pub fn remove_blocked_or_expired_buckets(&mut self, bucket: &u32) {
if self.openinv_buckets.contains(bucket) {
println!("Removing a bucket that has not been distributed yet!");
@ -197,6 +208,7 @@ impl BridgeDb {
}
}
/// Mark a bucket as distributed
pub fn mark_distributed(&mut self, bucket: u32) {
self.distributed_buckets.push(bucket);
}
@ -239,8 +251,8 @@ impl BridgeDb {
}
}
/// Verify an open invitation. Returns the invitation id and the
/// bucket number if the signature checked out. It is up to the
/// Verify an open invitation. Returns the invitation id and the
/// bucket number if the signature checked out. It is up to the
/// caller to then check that the invitation id has not been used
/// before.
pub fn verify(
@ -250,7 +262,7 @@ impl BridgeDb {
// Pull out the signature and verify it
let sig = Signature::try_from(&invitation[(32 + 4)..])?;
pubkey.verify(&invitation[0..(32 + 4)], &sig)?;
// The signature passed. Pull out the bucket number and then
// The signature passed. Pull out the bucket number and then
// the invitation id
let bucket = u32::from_le_bytes(invitation[32..(32 + 4)].try_into().unwrap());
let s = Scalar::from_canonical_bytes(invitation[0..32].try_into().unwrap());
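The byte layout the verification code above assumes can be sketched as a standalone parser (a simplified sketch; the real `verify` also checks the Ed25519 signature and decodes the id as a `Scalar`):

```rust
// Assumed open-invitation layout, per the parsing above:
// bytes 0..32 invitation id, 32..36 bucket number (LE u32), 36.. signature.
fn split_invitation(inv: &[u8]) -> Option<([u8; 32], u32, &[u8])> {
    if inv.len() < 36 {
        return None; // too short to hold id + bucket number
    }
    let id: [u8; 32] = inv[0..32].try_into().ok()?;
    let bucket = u32::from_le_bytes(inv[32..36].try_into().ok()?);
    Some((id, bucket, &inv[36..]))
}

fn main() {
    let mut inv = vec![0u8; 36 + 64]; // 64-byte Ed25519 signature assumed
    inv[32..36].copy_from_slice(&5u32.to_le_bytes());
    let (_id, bucket, sig) = split_invitation(&inv).unwrap();
    assert_eq!(bucket, 5);
    assert_eq!(sig.len(), 64);
}
```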
@ -270,7 +282,7 @@ impl Default for BridgeDb {
}
}
/// The bridge authority. This will typically be a singleton object.
/// The bridge authority. This will typically be a singleton object.
#[derive(Debug, Serialize, Deserialize)]
pub struct BridgeAuth {
/// The private key for the main Lox credential
@ -363,7 +375,7 @@ impl BridgeAuth {
/// Insert a set of open invitation bridges.
///
/// Each of the bridges will be given its own open invitation
/// bucket, and the BridgeDb will be informed. A single bucket
/// bucket, and the BridgeDb will be informed. A single bucket
/// containing all of the bridges will also be created, with a trust
/// upgrade migration from each of the single-bridge buckets.
pub fn add_openinv_bridges(
@ -406,6 +418,9 @@ impl BridgeAuth {
Ok(())
}
/// When syncing the Lox bridge table with rdsys, this function returns any bridges
/// that are found in the Lox bridge table that are not found in the Vector
/// of bridges received from rdsys through the Lox distributor.
pub fn find_and_remove_unaccounted_for_bridges(
&mut self,
accounted_for_bridges: Vec<u64>,
@ -419,6 +434,7 @@ impl BridgeAuth {
unaccounted_for
}
/// Allocate single left over bridges to an open invitation bucket
pub fn allocate_bridges(
&mut self,
distributor_bridges: &mut Vec<BridgeLine>,
@ -447,12 +463,10 @@ impl BridgeAuth {
// Update the details of a bridge in the bridge table. This assumes that the IP and Port
// of a given bridge remains the same and thus can be updated.
// First we must retrieve the list of reachable bridges, then we must search for any matching our partial key
// which will include the IP and Port. Then we can replace the original bridge with the updated bridge
// which will include the IP and Port. Finally we can replace the original bridge with the updated bridge.
// Returns true if the bridge has successfully updated
pub fn bridge_update(&mut self, bridge: &BridgeLine) -> bool {
let mut res: bool = false; // default false, assuming the update failed
//Needs to be updated since bridge will only match on some fields.
let reachable_bridges = self.bridge_table.reachable.clone();
for reachable_bridge in reachable_bridges {
if reachable_bridge.0.uid_fingerprint == bridge.uid_fingerprint {
@ -490,6 +504,8 @@ impl BridgeAuth {
res
}
/// Attempt to remove a bridge that is failing tests and replace it with a bridge from
/// available_bridge or from a spare bucket
pub fn bridge_replace(
&mut self,
bridge: &BridgeLine,
@ -588,21 +604,21 @@ impl BridgeAuth {
res
}
/// Mark a bridge as unreachable
/// Mark a bridge as blocked
///
/// This bridge will be removed from each of the buckets that
/// contains it. If any of those are open-invitation buckets, the
/// contains it. If any of those are open-invitation buckets, the
/// trust upgrade migration for that bucket will be removed and the
/// BridgeDb will be informed to stop handing out that bridge. If
/// BridgeDb will be informed to stop handing out that bridge. If
/// any of those are trusted buckets where the number of reachable
/// bridges has fallen below the threshold, a blockage migration
/// from that bucket to a spare bucket will be added, and the spare
/// bucket will be removed from the list of hot spares. In
/// bucket will be removed from the list of hot spares. In
/// addition, if the blocked bucket was the _target_ of a blockage
/// migration, change the target to the new (formerly spare) bucket.
/// Returns true if successful, or false if it needed a hot spare but
/// there was none available.
pub fn bridge_unreachable(&mut self, bridge: &BridgeLine, bdb: &mut BridgeDb) -> bool {
pub fn bridge_blocked(&mut self, bridge: &BridgeLine, bdb: &mut BridgeDb) -> bool {
let mut res: bool = true;
if self.bridge_table.unallocated_bridges.contains(bridge) {
let index = self
@ -647,9 +663,9 @@ impl BridgeAuth {
continue;
}
// This bucket is now unreachable. Get a spare bucket
// This bucket is now unreachable. Get a spare bucket
if self.bridge_table.spares.is_empty() {
// Uh, oh. No spares available. Just delete any
// Uh, oh. No spares available. Just delete any
// migrations leading to this bucket.
res = false;
self.trustup_migration_table
@ -692,7 +708,7 @@ impl BridgeAuth {
}
// Since buckets are moved around in the bridge_table, finding a lookup key that
// does not overwrite existing bridges could become an issue. We keep a list
// does not overwrite existing bridges could become an issue. We keep a list
// of recycleable lookup keys from buckets that have been removed and prioritize
// this list before increasing the counter
fn find_next_available_key(&mut self, bdb: &mut BridgeDb) -> Result<u32, NoAvailableIDError> {
@ -728,6 +744,7 @@ impl BridgeAuth {
self.clean_up_open_entry(bdb);
}
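The recycled-key policy described above (prefer a key from the recycled list, otherwise use and bump the counter) can be sketched as follows. This is a simplification; the real `find_next_available_key` also consults blocked and open-entry keys:

```rust
// Sketch of lookup-key recycling for the bridge-table HashMaps.
fn next_key(recycleable: &mut Vec<u32>, counter: &mut u32) -> Option<u32> {
    if let Some(k) = recycleable.pop() {
        return Some(k); // reuse a key from a removed bucket first
    }
    let k = *counter;
    *counter = counter.checked_add(1)?; // None once u32::MAX keys are exhausted
    Some(k)
}

fn main() {
    let (mut recycled, mut counter) = (vec![7u32], 3u32);
    assert_eq!(next_key(&mut recycled, &mut counter), Some(7)); // recycled first
    assert_eq!(next_key(&mut recycled, &mut counter), Some(3)); // then the counter
    assert_eq!(counter, 4);
}
```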
/// Cleans up expired blocked buckets
fn clean_up_blocked(&mut self) {
if !self.bridge_table.blocked_keys.is_empty()
&& self
@ -754,7 +771,7 @@ impl BridgeAuth {
if bridgeline.port > 0 {
// Move to unallocated bridges
self.bridge_table.unallocated_bridges.push(*bridgeline);
// Check if it's still in the reachable bridges. It should be if we've gotten this far.
// Check if it's still in the reachable bridges. It should be if we've gotten this far.
if let Some(_reachable_indexes_for_bridgeline) =
self.bridge_table.reachable.get(bridgeline)
{
@ -780,6 +797,7 @@ impl BridgeAuth {
}
}
/// Cleans up expired open invitation buckets
fn clean_up_open_entry(&mut self, bdb: &mut BridgeDb) {
// First check if there are any open invitation indexes that are old enough to be replaced
if !self.bridge_table.open_inv_keys.is_empty()
@ -825,14 +843,14 @@ impl BridgeAuth {
self.time_offset += time::Duration::days(1);
}
//#[cfg(test)]
///#[cfg(test)]
/// For testing only: manually advance the day by the given number
/// of days
pub fn advance_days(&mut self, days: u16) {
self.time_offset += time::Duration::days(days.into());
}
/// Get today's (real or simulated) date
/// Get today's (real or simulated) date as u32
pub fn today(&self) -> u32 {
// We will not encounter negative Julian dates (~6700 years ago)
// or ones larger than 32 bits
@ -842,7 +860,7 @@ impl BridgeAuth {
.unwrap()
}
/// Get today's (real or simulated) date
/// Get today's (real or simulated) date as a DateTime<Utc> value
pub fn today_date(&self) -> DateTime<Utc> {
Utc::now()
}
@ -961,13 +979,13 @@ pub fn pt_dbl(P: &RistrettoPoint) -> RistrettoPoint {
/// The protocol modules.
///
/// Each protocol lives in a submodule. Each submodule defines structs
/// Each protocol lives in a submodule. Each submodule defines structs
/// for Request (the message from the client to the bridge authority),
/// State (the state held by the client while waiting for the reply),
/// and Response (the message from the bridge authority to the client).
/// Each submodule defines functions request, which produces a (Request,
/// State) pair, and handle_response, which consumes a State and a
/// Response. It also adds a handle_* function to the BridgeAuth struct
/// Response. It also adds a handle_* function to the BridgeAuth struct
/// that consumes a Request and produces a Result<Response, ProofError>.
pub mod proto {
pub mod blockage_migration;

View File

@ -888,7 +888,7 @@ fn block_bridges(th: &mut TestHarness, to_block: usize) {
let ba_clone = th.ba.bridge_table.buckets.clone();
if let Some(bridgelines) = ba_clone.get(&u32::try_from(index).unwrap()) {
for bridgeline in bridgelines {
th.ba.bridge_unreachable(bridgeline, &mut th.bdb);
th.ba.bridge_blocked(bridgeline, &mut th.bdb);
}
}
}
@ -1229,7 +1229,7 @@ fn test_mark_unreachable() {
// Mark a bridge in an untrusted bucket as unreachable
let bucket6 = th.ba.bridge_table.buckets.get(&6u32).unwrap();
let b6 = bucket6[0];
th.ba.bridge_unreachable(&b6, &mut th.bdb);
th.ba.bridge_blocked(&b6, &mut th.bdb);
println!("spares = {:?}", th.ba.bridge_table.spares);
println!("tmig = {:?}", th.ba.trustup_migration_table.table);
@ -1240,7 +1240,7 @@ fn test_mark_unreachable() {
// unreachable
let bucket7 = th.ba.bridge_table.buckets.get(&7u32).unwrap();
let b7 = bucket7[0];
th.ba.bridge_unreachable(&b7, &mut th.bdb);
th.ba.bridge_blocked(&b7, &mut th.bdb);
println!("spares = {:?}", th.ba.bridge_table.spares);
println!("tmig = {:?}", th.ba.trustup_migration_table.table);
@ -1262,8 +1262,8 @@ fn test_mark_unreachable() {
let bt1 = bucket1[1];
let bucket2 = th.ba.bridge_table.buckets.get(&target).unwrap();
let bt2 = bucket2[2];
th.ba.bridge_unreachable(&bt1, &mut th.bdb);
th.ba.bridge_unreachable(&bt2, &mut th.bdb);
th.ba.bridge_blocked(&bt1, &mut th.bdb);
th.ba.bridge_blocked(&bt2, &mut th.bdb);
println!("spares = {:?}", th.ba.bridge_table.spares);
println!("tmig = {:?}", th.ba.trustup_migration_table.table);
@ -1313,8 +1313,8 @@ fn test_blockage_migration() {
assert!(bucket.1.is_some());
// Oh, no! Two of our bridges are blocked!
th.ba.bridge_unreachable(&bucket.0[0], &mut th.bdb);
th.ba.bridge_unreachable(&bucket.0[2], &mut th.bdb);
th.ba.bridge_blocked(&bucket.0[0], &mut th.bdb);
th.ba.bridge_blocked(&bucket.0[2], &mut th.bdb);
println!("spares = {:?}", th.ba.bridge_table.spares);
println!("tmig = {:?}", th.ba.trustup_migration_table.table);

View File

@ -7,7 +7,6 @@ rust-version = "1.65"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/wikis/home"
description = "General helpers used by Lox"
keywords = ["tor", "lox"]
# We must put *something* here and this will do
categories = ["rust-patterns"]
repository = "https://gitlab.torproject.org/tpo/anti-censorship/lox.git/"

View File

@ -1,10 +1,15 @@
[package]
name = "lox-wasm"
authors = ["Cecylia Bocovich <cohosh@torproject.org>"]
authors = ["Cecylia Bocovich <cohosh@torproject.org>", "Lindsey Tulloch <onyinyang@torproject.org>"]
version = "0.1.0"
edition = "2021"
description = "WASM bindings for lox"
license = "MIT"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/wikis/home"
keywords = ["tor", "lox", "bridges","censorship-resistance"]
categories = ["wasm", "web-programming::http-client","external-ffi-bindings"]
repository = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/tree/main/crates/lox-wasm"
readme = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/blob/main/crates/lox-wasm/README.md"
[lib]
crate-type = ["cdylib"]

View File

@ -16,6 +16,9 @@ wasm-pack build --target web
# Testing
### Testing Locally
The provided `index.html` file can be used for testing the lox bindings. First, follow the instructions to [run a lox server](https://gitlab.torproject.org/cohosh/lox-server).
Then, spin up a simple local webserver in the current directory:

View File

@ -1,10 +1,15 @@
[package]
name = "rdsys_backend"
authors = ["Cecylia Bocovich <cohosh@torproject.org>"]
authors = ["Cecylia Bocovich <cohosh@torproject.org>", "Lindsey Tulloch <onyinyang@torproject.org>"]
version = "0.2.0"
edition = "2021"
license = "MIT"
homepage = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/wikis/home"
keywords = ["tor", "lox", "bridges","censorship-resistance"]
categories = ["api-bindings", "encoding"]
repository = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/tree/main/crates/rdsys-backend-api"
readme = "https://gitlab.torproject.org/tpo/anti-censorship/lox/-/blob/main/crates/rdsys-backend-api/README.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
serde_json = "1"