Jetty server high CPU when client sends data of length > 17408
This affects SSL connections only, which are not part of our default setup.
Adapted from workaround at:
https://github.com/eclipse/jetty.project/security/advisories/GHSA-26vr-8j45-3r4w
Put the new checks directly in the unwrap() method,
rather than subclassing SslConnection, as that would require config file changes.
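A minimal sketch of the shape of the added check, assuming the guard treats an
unwrap() that reports OK but consumes and produces nothing as fatal (names here
are illustrative, not the actual Jetty patch):

    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLException;
    import javax.net.ssl.SSLHandshakeException;

    // Illustrative only; not the actual Jetty code.
    class GuardedUnwrap {
        // If unwrap() reports OK but made no progress, fail the connection
        // instead of letting the fill loop spin at 100% CPU.
        static SSLEngineResult unwrap(SSLEngine engine, ByteBuffer netIn, ByteBuffer appIn)
                throws SSLException {
            SSLEngineResult result = engine.unwrap(netIn, appIn);
            if (result.getStatus() == SSLEngineResult.Status.OK
                    && result.bytesConsumed() == 0
                    && result.bytesProduced() == 0)
                throw new SSLHandshakeException("TLS unwrap made no progress");
            return result;
        }
    }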
Reverse cache wasn't regenerated at midnight,
so decryption would fail after the first routing key change.
We had the rollover() method but it wasn't called.
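A sketch of the missing hookup, assuming the fix simply schedules the existing
rollover() call for each UTC midnight, when the routing key changes (the
scheduler shown is illustrative):

    import java.time.Duration;
    import java.time.Instant;
    import java.time.ZoneOffset;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative; the real fix wires rollover() into the router's midnight event.
    class ReverseCacheRollover {
        private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

        void scheduleDailyRollover(Runnable rollover) {
            Instant now = Instant.now();
            // The routing key (and so the reverse cache) changes at UTC midnight
            Instant nextMidnight = now.atZone(ZoneOffset.UTC).toLocalDate()
                                      .plusDays(1).atStartOfDay(ZoneOffset.UTC).toInstant();
            long delay = Duration.between(now, nextMidnight).toMillis();
            timer.scheduleAtFixedRate(rollover, delay,
                                      TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
        }
    }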
WIP
Offer intro key for IPv6
Pick introducers for IPv6
Publish address with IPv6 introducers
Reduce churn of selected introducers
Only adjust transport bid if the peer publishes the C cap
Log tweaks
Part 1:
Change bandwidth estimate to exponential moving average
(Similar to Westwood+ Simple Bandwidth Estimator in streaming)
instead of 40 ms bucket.
Also use it for tunnel.participatingBandwidthOut stat.
Remove linear moving average code previously used for stat
Reduce RED threshold from 120% to 95% of limit
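A minimal sketch of the estimator described in Part 1, assuming a
Westwood+-style EMA over periodic samples (the smoothing factor and sample
interval are assumptions, not the patch's values):

    // Illustrative EMA bandwidth estimator; constants are assumptions.
    class EMABandwidthEstimator {
        private static final double ALPHA = 0.125;  // assumed smoothing factor
        private static final long INTERVAL = 100;   // assumed sample interval, ms
        private double estimate;                    // bytes per second
        private long bytesSinceSample;
        private long lastSample = System.currentTimeMillis();

        synchronized void sent(int bytes) {
            bytesSinceSample += bytes;
            long now = System.currentTimeMillis();
            long elapsed = now - lastSample;
            if (elapsed >= INTERVAL) {
                double sample = bytesSinceSample * 1000.0 / elapsed;
                // EMA replaces the old fixed 40 ms bucket
                estimate = (1 - ALPHA) * estimate + ALPHA * sample;
                bytesSinceSample = 0;
                lastSample = now;
            }
        }

        synchronized double getEstimate() { return estimate; }
    }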
Part 2:
Fix the other part of RED, the drop calculation.
Previously, we simply started dropping once the bandwidth
exceeded a threshold, with the drop percentage rising
linearly from 0 to 100% based on how far the bandwidth was
above the threshold. This was far, far from the RED paper.
Now, we follow the RED paper (see ref. in SyntheticREDQueue javadoc)
to calculate an average queue size, using the exact same
exponential moving average method used for bandwidth.
Similar to CoDel, it also includes a count of how long
the size is over the threshold, and increases the drop probability with the count.
The unadjusted drop probability rises from 0 to 2%
and then everything is dropped, as in the RED paper.
The low and high thresholds are configured at 77 ms and 333 ms of queued data, respectively.
The queue is "synthetic" in that there's not actually a queue.
It only calculates how big the queue would be if it were
a real queue and were being emptied at exactly the target rate.
The actual queueing is done downstream in the transports and in UDP-Sender.
The goals are, for the default 80% share, to do most of the
participating-traffic dropping here in RED, not downstream in UDP-Sender,
while fully utilizing the configured share bandwidth.
If the router goes into high message delay mode, that means we are not dropping enough in RED.
Above 80% share this probably doesn't work as well.
There may be more tuning required, in particular to achieve the goal of "protecting" the UDP-Sender
queue and local client/router traffic by dropping more aggressively in RED.
This patch also improves the overhead estimate for outbound participating tunnel traffic at the OBEP.
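A condensed sketch of the drop decision, following the RED paper's calculation
with the constants given above (the EMA weight is an assumption):

    // Illustrative synthetic RED queue; 2%, 77 ms, and 333 ms are from the
    // description above, the EMA weight is an assumption.
    class SyntheticREDQueueSketch {
        private static final double MAX_P = 0.02;  // unadjusted drop prob. tops out at 2%
        private static final double ALPHA = 0.002; // assumed EMA weight
        private final double rate;                 // target drain rate, bytes/sec
        private final double minTh, maxTh;         // thresholds, bytes
        private double curQ, avgQ;                 // synthetic and average queue size
        private long last = System.currentTimeMillis();
        private int count;                         // packets since last drop over minTh

        SyntheticREDQueueSketch(double bytesPerSecond) {
            rate = bytesPerSecond;
            minTh = rate * 0.077;                  // 77 ms of queued data
            maxTh = rate * 0.333;                  // 333 ms of queued data
        }

        // Returns true if the packet should be dropped.
        synchronized boolean shouldDrop(int size) {
            long now = System.currentTimeMillis();
            // Empty the synthetic queue at exactly the target rate
            curQ = Math.max(0, curQ - rate * (now - last) / 1000.0);
            last = now;
            curQ += size;
            avgQ = (1 - ALPHA) * avgQ + ALPHA * curQ;  // same EMA as the bandwidth est.
            if (avgQ < minTh) { count = 0; return false; }
            if (avgQ >= maxTh) { curQ -= size; return true; }  // drop everything
            double pb = MAX_P * (avgQ - minTh) / (maxTh - minTh);
            // Drop probability grows with the count, as in the RED paper
            double pa = (count * pb >= 1) ? 1 : pb / (1 - count * pb);
            count++;
            if (Math.random() < pa) { count = 0; curQ -= size; return true; }
            return false;
        }
    }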
Reviewed, tested, acked by zlatinb
Increase min and max queue size
Tweak stats
Util: Allow creation of CoDel queues with non-default parameters
New params are tentative, may be adjusted later
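A hedged usage sketch; the 5-argument constructor with target and interval
overrides is an assumption about the new API:

    import net.i2p.I2PAppContext;
    import net.i2p.util.CoDelBlockingQueue;

    class CoDelParamsDemo {
        public static void main(String[] args) {
            I2PAppContext ctx = I2PAppContext.getGlobalContext();
            // Hypothetical: override the standard CoDel target (5 ms) and
            // interval (100 ms); the extra parameters are an assumption.
            CoDelBlockingQueue<String> q =
                new CoDelBlockingQueue<String>(ctx, "demo", 768, 15, 200);
            //                              ctx, name, capacity, targetMs, intervalMs
        }
    }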
Peer test tries only one peer every 5 minutes, and uses a store of
the peer's own router info, which is not helpful.
Peer test records its result as a dbLookup success/failure,
but for peers that are not floodfill, this stat is useless.
The tunnel test is disabled by default except for hidden mode.
The tunnel test response time stat uses a large amount of memory (5 rates)
and has apparently never been used since it was added in 2004.
There is also a tunnel test time average variable, separate
from the stat, which is likewise unused.
Each is disabled with a separate hardcoded config,
pending evaluation of whether and how to make the tests useful
and where to effectively use the result for peer selection.
List updated using the Freedom in the World Index 2020
Force hidden mode routers to LU
Don't publish stats in first hour of uptime
Add H.323 to invalid ports list
Improve crashed message in event log
This partially fixes the issue of packets not being retransmitted
before they expire in 10 seconds, introduced in 0.9.48, as reported by
jogger at http://zzz.i2p/topics/3003
Fast retransmit was also suggested by jogger as a solution and discussed in that thread.
This code is based on the requirements for TCP fast retransmit
as specified in RFC 5681, but it cannot precisely follow the RFC,
as UDP messages can be dropped without affecting later messages:
- nack counter is per-message, not per-connection
- some interactions with the retransmit timer when in fast retx mode
- msg expiration is currently 10s but max RTO is 60s
- interactions with individual fragment transmission implemented in 0.9.48-5
- this is a sender-side fix but it depends on far-end ack resend strategy
Maintain a local message sequence number and store
it in OMF, previously unused as CoDel is disabled.
Remove acked messages from _outboundMessages as usual,
but store message and sequence numbers in a LinkedHashMap,
so we may interpret additional acks as nacks.
Calculate the highest-acked sequence number for every incoming packet.
Mark messages older than the highest acked as nacked.
Fast-retransmit after 3 nacks.
Window and SST adjustments per RFC 5681 sec. 2.4
Reduce resend ack quantity and timeout to improve odds of receiving "nacks"
Disable wakeup of OMF from IMF; should not be needed now that PS calls nudge()
PS.acked(partial) now returns true if any fragment was acked, not only if the message is complete
Log tweaks
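In outline, the nack bookkeeping described above (a sketch only; names are
illustrative, not the actual OMF/PS code):

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative nack counter; the real bookkeeping lives in OMF/PeerState.
    class FastRetransmitSketch {
        private static final int FAST_RETX_NACKS = 3;
        // local seq. number -> nack count, insertion-ordered as in the patch
        private final LinkedHashMap<Long, Integer> pending = new LinkedHashMap<Long, Integer>();
        private long highestAcked = -1;

        synchronized void sent(long seq) { pending.put(seq, Integer.valueOf(0)); }

        // Call for each ack in an incoming packet; returns seqs to fast-retransmit.
        synchronized List<Long> acked(long seq) {
            pending.remove(seq);
            highestAcked = Math.max(highestAcked, seq);
            List<Long> retx = new ArrayList<Long>();
            // Anything older than the highest-acked seq. counts as nacked
            for (Map.Entry<Long, Integer> e : pending.entrySet()) {
                if (e.getKey() < highestAcked) {
                    int nacks = e.getValue() + 1;
                    e.setValue(nacks);
                    if (nacks == FAST_RETX_NACKS)
                        retx.add(e.getKey());      // fast-retransmit after 3 nacks
                }
            }
            return retx;
        }
    }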
Still todo: possible additional changes to ack resend strategy;
possible parameter adjustments including msg expiration;
confirm that OMF wakeup in IMF is not required;
further testing and cleanups;
take additional ideas from alternative proposal in MR !8;
stat tweaks;
find related tickets to close
Reviewed by and contains code from zlatinb in MR !8
This builds on several previous SSU improvements; see #2427 for a list.
ref: gitlab MRs !8, !9, !10, !11
Always request new LS for subsessions also
Don't reuse LS object for subsessions
Cancel rerequest timer as necessary
Fixes watchdog warnings
Fixes console status for subsessions in different states
javadocs
Improve handling when all fragments will not fit in the window.
Track per-fragment send count.
Reset send window when retransmitting.
Update send window when partial acks received.
Make OMS.getMaxSends() and getPushCount() track different things.
Change OMS.push() to be called by OMF and return the pushed fragments.
Use size of smallest fragment rather than total size to determine if we can send a message now.
This is an improved fix for ticket #2505.
Eliminate repeated calls to OMS.getLifetime()
Log tweaks and reduce log levels
Improves throughput on lossy connections.
Reduces latency for large messages.
This is prep for reducing DEFAULT_SEND_WINDOW_BYTES and for Westwood+ (W+),
changes which would otherwise have exacerbated these issues.
Additional changes to follow, implementing Westwood+, see #2427
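A sketch of the changed eligibility test, assuming the old code required the
whole remaining message to fit in the window while the new code requires only
the smallest fragment to fit (illustrative, not the actual OMS/OMF code):

    // Illustrative; the real logic is in OMS/OMF.
    class WindowCheckSketch {
        // Old: the total remaining size had to fit in the window,
        // so one large message could stall behind a small window.
        static boolean canSendOld(int windowBytes, int[] unsentFragmentSizes) {
            int total = 0;
            for (int s : unsentFragmentSizes)
                total += s;
            return total <= windowBytes;
        }

        // New: send as soon as the smallest unsent fragment fits.
        static boolean canSendNew(int windowBytes, int[] unsentFragmentSizes) {
            if (unsentFragmentSizes.length == 0)
                return false;
            int min = Integer.MAX_VALUE;
            for (int s : unsentFragmentSizes)
                min = Math.min(min, s);
            return min <= windowBytes;
        }
    }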
- Change router ECIES SKM to use N pattern, remove Elligator2, to match proposal changes
- Allow encrypted messages to ECIES routers
- Allow ECIES routers to become floodfill
- Add XDH factory to VM comm system for tests
Rework the retransmission timer, modelled on TCP Reno (RFCs 5681 and 6298):
- Use a single timer per connection
- Resend up to half the un-acked messages per timer event instead of a single message
- Only send either old or new messages, do not mix
- Cache/avoid several timer calls
- Instead of 3 return values, allocating bandwidth is now a boolean function
- Avoid one of the iterations over all un-acked messages every packet pusher loop
- Remove 100 ms failsafe
- Fix OMF debug log NPE
With the same CPU usage, the bandwidth is much higher
Significant speed improvement for lossy connections (e.g. wifi)
Patch by zlatinb
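For reference, the RFC 6298 timer computation the rework follows; the 60 s
maximum matches the max RTO mentioned above, and the 1 s minimum is the RFC's
recommended floor:

    // RFC 6298 SRTT/RTTVAR/RTO computation; clamps per the RFC and notes above.
    class RenoRetransmitTimer {
        private static final double ALPHA = 0.125, BETA = 0.25;   // RFC 6298 constants
        private static final int MIN_RTO = 1000, MAX_RTO = 60000; // ms
        private double srtt = -1, rttvar;
        private int rto = 3000;                                   // initial RTO per the RFC

        synchronized void recordRtt(int rttMs) {
            if (srtt < 0) {                       // first measurement
                srtt = rttMs;
                rttvar = rttMs / 2.0;
            } else {
                rttvar = (1 - BETA) * rttvar + BETA * Math.abs(srtt - rttMs);
                srtt = (1 - ALPHA) * srtt + ALPHA * rttMs;
            }
            rto = (int) Math.min(MAX_RTO, Math.max(MIN_RTO, srtt + 4 * rttvar));
        }

        // Back off: double the RTO on each timer expiration (RFC 6298 sec. 5.5)
        synchronized void backoff() { rto = Math.min(MAX_RTO, rto * 2); }

        synchronized int getRTO() { return rto; }
    }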
Support sending encrypted lookups and stores to ECIES routers
Support requesting AEAD replies to ECIES routers
Encrypt RI lookups when using ECIES even on slow machines
Switch back to RatchetSKM
Don't schedule ack timer for router SKM
Reduce getContext() calls
GMB null check cleanup
MessageWrapper javadoc clarifications
Log tweaks
Preliminary, proposal not finalized, subject to change
Not yet compatibility tested with other implementations
Add peers to match requested length for explicitPeers
Remove commented-out code
Log tweaks
Add support for zen and zen2
Enable more fallbacks for zen and zen2
Add Zen and Zen2 binaries, stripped
Built with gcc 9.3.0
Other binaries will be added if testing shows improvement
Fix hangs in mbuild-all.sh build script
Add silvermont and goldmont to build script, untested, support TBD
GMP is GPLv2
More info: http://zzz.i2p/topics/2955