Compare commits


143 Commits

Author SHA1 Message Date
034803add7 * 2006-07-28 0.6.1.24 released 2006-07-29 18:03:14 +00:00
b25bb053bb 2006-07-28 jrandom
* Don't try to reverify too many netDb entries at once (thanks
      cervantes and Complication!)
2006-07-29 04:41:15 +00:00
9bd0c79441 2006-07-28 jrandom
* Actually fix the threading deadlock issue in the netDb (removing
      the synchronized access to individual kbuckets while validating
      individual entries) (thanks cervantes, postman, frosk, et al!)
2006-07-29 01:11:50 +00:00
06b8670410 * 2006-07-27 0.6.1.23 released 2006-07-28 03:34:59 +00:00
6577ae499f 2006-07-27 jrandom
* Cut down NTCP connection establishments once we know the peer is skewed
      (rather than wait for full establishment before verifying)
    * Removed a lock on the stats framework when accessing rates, which
      shouldn't be a problem, assuming rates are created (pretty much) all at
      once and merely updated during the lifetime of the jvm.
2006-07-27 23:40:00 +00:00
54bc5485ec oops, thanks bar! 2006-07-27 07:06:55 +00:00
84b741ac98 2006-07-27 jrandom
* Further NTCP write status cleanup
    * Handle more oddly-timed NTCP disconnections (thanks bar!)
2006-07-27 06:20:25 +00:00
c48c419d74 quick prng workaround 2006-07-27 01:34:31 +00:00
fb2e795add 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 01:04:59 +00:00
ec215777ec 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 00:56:49 +00:00
d4e0f27c56 2006-07-26 jrandom
* Every time we create a new router identity, add an entry to the
      new "identlog.txt" text file in the I2P install directory.  For
      debugging purposes, publish the count of how many identities the
      router has cycled through, though not the identities themselves.
    * Cleaned up the way the multitransport shitlisting worked, and
      added per-transport shitlists
    * When dropping a router reference locally, first fire a netDb
      lookup for the entry
    * Take the peer selection filters into account when organizing the
      profiles (thanks Complication!)
    * Avoid some obvious configuration errors for the NTCP transport
      (invalid ports, "null" ip, etc)
    * Deal with some small NTCP bugs found in the wild (unresolveable
      hosts, strange network discons, etc)
    * Send our netDb info to peers we have direct NTCP connections to
      after each 6-12 hours of connection uptime
    * Clean up the NTCP reading and writing queue logic to avoid some
      potential delays
    * Allow people to specify the IP that the SSU transport binds on
      locally, via the advanced config "i2np.udp.bindInterface=1.2.3.4"
2006-07-26 06:36:18 +00:00
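
The bindInterface option above is just an address for the UDP socket to bind to; a minimal Java sketch of the idea, with hypothetical names rather than the real SSU transport code:

    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.SocketException;
    import java.net.UnknownHostException;
    import java.util.Properties;

    // Hypothetical illustration: bind the UDP socket either to the address named
    // by the advanced config or, if none is set, to all local interfaces.
    public class BindInterfaceSketch {
        static DatagramSocket openUdpSocket(Properties advancedConfig, int port)
                throws SocketException, UnknownHostException {
            String bindTo = advancedConfig.getProperty("i2np.udp.bindInterface");
            if (bindTo == null || bindTo.trim().length() == 0)
                return new DatagramSocket(port);                       // wildcard bind
            InetAddress addr = InetAddress.getByName(bindTo.trim());   // e.g. 1.2.3.4
            return new DatagramSocket(new InetSocketAddress(addr, port));
        }
    }
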
e1c686baa6 2006-07-18 Complication
* URL and date fix in news.xml
2006-07-18 23:35:54 +00:00
d57af1aef4 remove 1.5ism 2006-07-18 20:20:08 +00:00
a52dd57215 * 2006-07-18 0.6.1.22 released
2006-07-18  jrandom
    * Add a failsafe to the NTCP transport to make sure we keep
      pumping writes when we should.
    * Properly reallow 16-32KBps routers in the default config
      (thanks Complication!)
2006-07-18 20:08:00 +00:00
65138357d3 2006-07-16 Complication
* Collect tunnel build agree/reject/expire statistics
      for each bandwidth tier of peers (and peers of unknown tiers,
      even if those shouldn't exist)
2006-07-16 17:20:46 +00:00
f6320696dd 2006-07-14 jrandom
* Improve the multitransport shitlisting (thanks Complication!)
    * Allow routers with a capacity of 16-32KBps to be used in tunnels under
      the default configuration (thanks for the stats Complication!)
    * Properly allow older router references to load on startup
      (thanks bar, Complication, et al!)
    * Add a new "i2p.alwaysAllowReseed" advanced config property, though
      hopefully today's changes should make this unnecessary (thanks void!)
    * Improved NTCP buffering
    * Close NTCP connections if we are too backlogged when writing to them
2006-07-14 18:08:44 +00:00
900d8a2026 oops, test method. thanks cervantes 2006-07-07 19:04:24 +00:00
ccc9a87e8c unnecessary 2006-07-07 18:59:16 +00:00
208634e5de 2006-07-04 jrandom
* New NIO-based tcp transport (NTCP), enabled by default for outbound
      connections only.  Those who configure their NAT/firewall to allow
      inbound connections and specify the external host and port
      (dyndns/etc is ok) on /config.jsp can receive inbound connections.
      SSU is still enabled by default for all users as a fallback.
    * Substantial bugfix to the tunnel gateway processing to transfer
      messages sequentially instead of interleaved
    * Renamed GNU/crypto classes to avoid name clashes with kaffe and other
      GNU/Classpath based JVMs
    * Adjust the Fortuna PRNG's pooling system to reduce contention on
      refill with a background thread to refill the output buffer
    * Add per-transport support for the shitlist
    * Add a new async pumped tunnel gateway to reduce tunnel dispatcher
      contention
2006-07-04 21:17:44 +00:00
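
For context on "outbound connections only": with NIO, initiating a connection needs no local configuration, while accepting inbound ones requires a listener bound to the host/port published on /config.jsp. A rough, hypothetical sketch of the outbound side (not the actual NTCP code):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    // Hypothetical illustration of the outbound side of an NIO transport: a
    // non-blocking connect that a Selector would later finish; listening for
    // inbound connections would additionally need a bound ServerSocketChannel
    // plus a published, reachable host and port.
    public class OutboundOnlySketch {
        static SocketChannel connectTo(String host, int port) throws IOException {
            SocketChannel chan = SocketChannel.open();
            chan.configureBlocking(false);
            chan.connect(new InetSocketAddress(host, port)); // completes asynchronously
            return chan;
        }
    }
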
3d07205c9d 2006-07-01 Complication
* Ensure that the I2PTunnel web interface won't update tunnel settings
      for shared clients when a non-shared client is modified
      (thanks for spotting, BarkerJr!)
2006-07-01 22:44:34 +00:00
f0a424a93f (zzz) .21, 6-13 mtg 2006-06-15 22:15:09 +00:00
f9b59ee07d 2006-06-14 cervantes
* Small tweak to I2PTunnel CSS, so it looks better with desktops
      that use Bitstream Vera fonts @ 96 dpi
2006-06-14 05:24:34 +00:00
b92b9d2618 * 2006-06-14 0.6.1.21 released 2006-06-14 02:17:40 +00:00
a3db9429a7 2006-06-13 jrandom
* Use a minimum uptime of 2 hours, not 4 (oops)
2006-06-13 23:29:51 +00:00
291a5c9578 2006-06-13 jrandom
* Cut down the proactive rejections due to queue size - if we are
      at the point of having decrypted the request off the queue, might
      as well let it through, rather than waste that decryption
2006-06-13 09:38:48 +00:00
0a3281c279 2006-06-11 Kloug
* Bugfix to the I2PTunnel IRC filter to support multiple concurrent
      outstanding pings/pongs
2006-06-11 19:48:37 +00:00
23f30ba576 2006-06-10 jrandom
* Further reduction in proactive rejections
2006-06-10 20:14:57 +00:00
f3de85c4de 2006-06-09 jrandom
* Don't let the pending tunnel request queue grow beyond reason
      (letting things sit for up to 30s when they fail after 10s
      seems a bit... off)
2006-06-10 00:34:44 +00:00
a3a4888e0b 2006-06-08 jrandom
* Be more conservative in the proactive rejections
2006-06-09 01:02:40 +00:00
6fd7881f8e thanks bar 2006-06-05 06:41:11 +00:00
381f716769 2006-06-04 Complication
* Stop sending a blank line before USER in susimail.
      Seemed to break in rare cases, thanks for reporting, Brachtus!
2006-06-05 01:33:03 +00:00
f2078e1523 * 2006-06-04 0.6.1.20 released
2006-06-04  jrandom
    * Reduce the SSU ack frequency
    * Tweaked the tunnel rejection settings to reject less aggressively
2006-06-04 22:25:08 +00:00
f2fb87c88b 2006-05-31 jrandom
* Only send netDb searches to the floodfill peers for the time being
    * Add some proof of concept filters for tunnel participation.  By default,
      it will skip peers with an advertised bandwidth of less than 32KBps or
      an advertised uptime of less than 2 hours.  If this is sufficient, a
      safer implementation of these filters will be implemented.
2006-05-31 23:23:37 +00:00
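
The filter described in the entry above boils down to two cutoffs; a minimal Java sketch with hypothetical names, not the actual profile-organizer code:

    // Hypothetical sketch of the proof-of-concept filter: skip peers whose
    // advertised bandwidth or advertised uptime falls below the cutoffs.
    public class ParticipationFilterSketch {
        static final int MIN_BANDWIDTH_KBPS = 32;
        static final long MIN_UPTIME_MS = 2 * 60 * 60 * 1000L; // 2 hours

        static boolean acceptForTunnels(int advertisedKBps, long advertisedUptimeMs) {
            return advertisedKBps >= MIN_BANDWIDTH_KBPS
                && advertisedUptimeMs >= MIN_UPTIME_MS;
        }
    }
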
fcbea19478 2006-05-30 Complication
* weekly news.xml update
2006-05-31 03:19:24 +00:00
92f25bd4fa 2006-05-18 Complication
* news.xml update
2006-05-19 00:17:10 +00:00
85c2c11217 * 2006-05-18 0.6.1.19 released
2006-05-18  jrandom
    * Made the SSU ACKs less frequent when possible
2006-05-18 22:31:06 +00:00
de1ca4aea4 2006-05-17 Complication
* Fix some oversights in my previous changes:
      adjust some loglevels, make a few statements less wasteful,
      make one comparison less confusing and more likely to log unexpected values
2006-05-18 03:42:55 +00:00
a0f865fb99 2006-05-17 jrandom
* Make the peer page sortable
    * SSU modifications to cut down on unnecessary connection failures
2006-05-18 03:00:48 +00:00
2c3fea5605 2006-05-16 jrandom
* Further shitlist randomizations
    * Adjust the stats monitored for detecting cpu overload when dropping new
      tunnel requests
2006-05-16 18:34:08 +00:00
ba1d88b5c9 2006-05-15 jrandom
* Add a load dependent throttle on the pending inbound tunnel request
      backlog
    * Increased the tunnel test failure slack before killing a tunnel
2006-05-15 17:33:02 +00:00
2ad715c668 2006-05-13 Complication
* Update the build number too
2006-05-14 05:11:36 +00:00
5f17557e54 2006-05-13 Complication
* Separate growth factors for tunnel count and tunnel test time
    * Reduce growth factors, so the probabilistic throttle would activate
    * Square probAccept values to decelerate more strongly when far from average
    * Create a bandwidth stat with approximately 15-second half life
    * Make allowTunnel() check the 1-second bandwidth for overload
      before doing allowance calculations using 15-second bandwidth
    * Tweak the overload detector in BuildExecutor to be more sensitive
      for rising edges, add ability to initiate tunnel drops
    * Add a function to seek and drop the highest-rate participating tunnel,
      keeping a fixed+random grace period between such drops.
      It doesn't seem very effective, so disabled by default
      ("router.dropTunnelsOnOverload=true" to enable)
2006-05-14 04:52:44 +00:00
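
The squared probAccept above amounts to accepting with probability (average/current)^2 once the current value exceeds its average, after a hard check against the 1-second bandwidth. A hypothetical sketch (names assumed, not the BuildExecutor code):

    import java.util.Random;

    // Hypothetical sketch: check the 1-second bandwidth for outright overload
    // first, then accept with probability (average/current)^2 so acceptance
    // falls off sharply the further the current value sits above its average.
    public class ProbAcceptSketch {
        private final Random rand = new Random();

        boolean allowTunnel(double oneSecondKBps, double limitKBps,
                            double current, double average) {
            if (oneSecondKBps > limitKBps) return false;   // hard overload check
            if (average <= 0 || current <= average) return true;
            double probAccept = average / current;
            probAccept *= probAccept;                      // square the ratio
            return rand.nextDouble() < probAccept;
        }
    }
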
2ad5a6f907 2006-05-11 jrandom
* PRNG bugfix (thanks cervantes and Complication!)
2006-05-12 03:31:44 +00:00
0920462060 2006-05-09 Complication
* weekly news.xml update
2006-05-10 02:38:42 +00:00
870e94e184 * 2006-05-09 0.6.1.18 released
2006-05-09  jrandom
    * Further tunnel creation timeout revamp
2006-05-09 21:17:17 +00:00
6b0d507644 2006-05-07 Complication
* Fix problem whereby repeated calls to allowed() would make
      the 1-tunnel exception permit more than one concurrent build
2006-05-08 03:19:46 +00:00
70cf9e4ca7 2006-05-06 jrandom
* Readjust the tunnel creation timeouts to reject less but fail earlier,
      while tracking the extended timeout events.
2006-05-06 20:27:34 +00:00
2a3974c71d 2006-05-04 jrandom
    * Short-circuit a highly congested part of the stat logging unless it's
      required (may or may not help with a synchronization issue reported by
      andreas)
2006-05-04 23:08:48 +00:00
46ac9292e8 2006-05-03 Complication
* Allow a single build attempt to proceed despite 1-minute overload
      only if the 1-second rate shows enough spare bandwidth
      (e.g. overload has already eased)
2006-05-03 11:13:26 +00:00
4307097472 2006-05-02 Complication
* Correct a misnamed property in SummaryHelper.java
      to avoid confusion
    * Make the maximum allowance of our own concurrent
      tunnel builds slightly adaptive: one concurrent build per 6 KB/s
      within the fixed range 2..10
    * While overloaded, try to avoid completely choking our own build attempts,
      instead prefer limiting them to 1
2006-05-03 04:30:26 +00:00
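
The adaptive allowance above works out to roughly one concurrent build per 6 KB/s, clamped to 2..10, with a floor of one build while overloaded. A hypothetical sketch (which bandwidth figure feeds it is an assumption here):

    // Hypothetical sketch of the adaptive allowance: one concurrent build per
    // 6 KB/s, clamped to the fixed range 2..10, and a floor of one build while
    // overloaded rather than choking our own builds completely.
    public class BuildAllowanceSketch {
        static int maxConcurrentBuilds(int bandwidthKBps, boolean overloaded) {
            if (overloaded) return 1;
            int allowed = bandwidthKBps / 6;
            if (allowed < 2) allowed = 2;
            if (allowed > 10) allowed = 10;
            return allowed;
        }
    }
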
ed3fdaf4f1 2006-05-02 Complication
* Fixed URL in previous update, sorry
2006-05-03 02:11:06 +00:00
378a9a8f5c 2006-05-02 Complication
* Weekly news.xml update
2006-05-03 02:03:01 +00:00
4ef6180455 2006-05-01 jrandom
* Adjust the tunnel build timeouts to cut down on expirations, and
      increased the SSU connection establishment retransmission rate to
      something less glacial.
    * For the first 5 minutes of uptime, be less aggressive with tunnel
      exploration, opting for more reliable peers to start with.
2006-05-01 22:40:21 +00:00
d4970e23c0 2006-05-01 jrandom
* Fix for a netDb lookup race (thanks cervantes!)
2006-05-01 19:09:02 +00:00
0c9f165016 fix typos 2006-05-01 15:39:37 +00:00
be3a899ecb 2006-04-27 jrandom
* Avoid a race in the message reply registry (thanks cervantes!)
2006-04-28 00:31:20 +00:00
7a6a749004 2006-04-27 jrandom
* Fixed the tunnel expiration desync code (thanks Complication!)
2006-04-28 00:08:40 +00:00
17271ee3f0 2006-04-25 Complication
* weekly news.xml update
2006-04-26 02:30:05 +00:00
99bcfa90df 2006-04-24 Complication
* Update news.xml to reflect 0.6.1.17
2006-04-24 12:43:25 +00:00
eb36e993c1 * 2006-04-23 0.6.1.17 released 2006-04-23 21:06:12 +00:00
e5eca5fa45 zzz update 2006-04-22 20:37:21 +00:00
8cba2f4236 2006-04-19 jrandom
* Adjust how we pick high capacity peers to allow the inclusion of fast
      peers (the previous filter assumed an old usage pattern)
    * New set of stats to help track per-packet-type bandwidth usage better
    * Cut out the proactive tail drop from the SSU transport, for now
    * Reduce the frequency of tunnel build attempts while we're saturated
    * Don't drop tunnel requests as easily - prefer to explicitly reject them
2006-04-19 17:46:51 +00:00
40d5ed31ac 2006-04-15 Complication
* Update news.xml to reflect 0.6.1.16
2006-04-15 17:25:50 +00:00
181275fe35 * 2006-04-15 0.6.1.16 released 2006-04-15 07:58:12 +00:00
23d8c01ce7 2006-04-15 jrandom
* Adjust the proactive tunnel request dropping so we will reject what we
      can instead of dropping so much (but still dropping if we get too far
      overloaded)
2006-04-15 07:15:19 +00:00
de83944486 2006-04-14 jrandom
* 0 isn't very random
    * Adjust the tunnel drop to be more reasonable
2006-04-14 20:24:07 +00:00
90cd7ff23a 2006-04-14 jrandom
* -28.00230115311259 is not between 0 and 1 in any universe I know.
    * Made the bw-related tunnel join throttle much simpler
2006-04-14 18:04:11 +00:00
8d0a9b4ccd 2006-04-14 jrandom
* Make some more stats graphable, and allow some internal tweaking on the
      tunnel pairing for creation and testing.
2006-04-14 11:40:35 +00:00
230d4cd23f * 2006-04-13 0.6.1.15 released 2006-04-13 12:40:21 +00:00
e9b6fcc0a4 2006-04-12 jrandom
* Added a further failsafe against trying to queue up too many messages to
      a peer.
2006-04-13 04:22:06 +00:00
8fcb871409 2006-04-12 jrandom
* Watch out for failed syndie index fetches (thanks bar!)
2006-04-12 06:49:01 +00:00
83bef43fd5 2006-04-11 jrandom
* Throttling improvements on SSU - throttle all transmissions to a peer
      when we are retransmitting, not just retransmissions.  Also, if
      we're already retransmitting to a peer, probabilistically tail drop new
      messages targeting that peer, based on the estimated wait time before
      transmission.
    * Fixed the rounding error in the inbound tunnel drop probability.
2006-04-11 13:39:06 +00:00
b4fc6ca31b 2006-04-10 jrandom
* Include a combined send/receive graph (good idea cervantes!)
    * Proactively drop inbound tunnel requests probabilistically as the
      estimated queue time approaches our limit, rather than letting them all
      through up to that limit.
2006-04-10 05:37:28 +00:00
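
The probabilistic drop can be read as: drop with a probability that rises linearly as the estimated queue time approaches the limit. A hypothetical sketch, not the actual router code:

    import java.util.Random;

    // Hypothetical sketch: drop with a probability that grows linearly as the
    // estimated queue time approaches the limit, instead of accepting everything
    // until the limit is hit.
    public class ProactiveDropSketch {
        private final Random rand = new Random();

        boolean dropRequest(long estimatedQueueMs, long maxQueueMs) {
            if (estimatedQueueMs >= maxQueueMs) return true;    // over the limit
            double p = (double) estimatedQueueMs / maxQueueMs;  // 0..1
            return rand.nextDouble() < p;
        }
    }
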
ab3f1b708d 2006-04-08 jrandom
* Stat summarization fix (removing the occasional holes in the jrobin
      graphs)
2006-04-09 01:14:08 +00:00
c76402a160 2006-04-08 jrandom
* Process inbound tunnel requests more efficiently
    * Proactively drop inbound tunnel requests if the queue before we'd
      process it in is too long (dynamically adjusted by cpu load)
    * Adjust the tunnel rejection throttle to reject requests when we have to
      proactively drop too many requests.
    * Display the number of pending inbound tunnel join requests on the router
      console (as the "handle backlog")
    * Include a few more stats in the default set of graphs
2006-04-08 06:15:43 +00:00
a50c73aa5e 2006-04-06 jrandom
* Fix for a bug in the new irc ping/pong filter (thanks Complication!)
2006-04-07 01:26:32 +00:00
5aa66795d2 2006-04-06 jrandom
* Fixed a typo in the reply cleanup code
2006-04-06 10:33:44 +00:00
ac3c2d2b15 * 2006-04-05 0.6.1.14 released 2006-04-05 17:08:04 +00:00
072a45e5ce 2006-04-05 jrandom
* Cut down on the time that we allow a tunnel creation request to sit by
      without response, and reject tunnel creation requests that are lagged
      locally.  Also switch to a bounded FIFO instead of a LIFO
    * Threading tweaks for the message handling (thanks bar!)
    * Don't add addresses to syndie with blank names (thanks Complication!)
    * Further ban clearance
2006-04-05 04:40:00 +00:00
1ab14e52d2 2006-04-04 Complication
* weekly news.xml update
2006-04-05 03:06:00 +00:00
9a820961a2 2006-04-05 jrandom
* Fix during the ssu handshake to avoid an unnecessary failure on
      packet retransmission (thanks ripple!)
    * Fix during the SSU handshake to use the negotiated session key asap,
      rather than using the intro key for more than we should (thanks ripple!)
    * Fixes to the message reply registry (thanks Complication!)
    * More comprehensive syndie banning (for repeated pushes)
    * Publish the router's ballpark bandwidth limit (w/in a power of 2), for
      testing purposes
    * Put a floor back on the capacity threshold, so too many failing peers
      won't cause us to pick very bad peers (unless we have very few good
      ones)
    * Bugfix to cut down on peers using introducers unnecessarily (thanks
      Complication!)
    * Reduced the default streaming lib message size to fit into a single
      tunnel message, rather than require 5 tunnel messages to be transferred
      without loss before recomposition.  This reduces throughput, but should
      increase reliability, at least for the time being.
    * Misc small bugfixes in the router (thanks all!)
    * More tweaking for Syndie's CSS (thanks Doubtful Salmon!)
2006-04-04 12:20:32 +00:00
764149aef3 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
    * Filter the IRC ping/pong messages, as some clients send unsafe
      information in them (thanks aardvax and dust!)
2006-04-03 10:07:22 +00:00
1b3ad31bff 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
2006-04-01 19:05:35 +00:00
15e6c27c04 2006-03-30 jrandom
* Substantially reduced the lock contention in the message registry (a
      major hotspot that can choke most threads).  Also reworked the locking
      so we don't need per-message timer events
    * No need to have additional per-peer message clearing, as they are
      either unregistered individually or expired.
    * Include some of the more transient tunnel throttling
2006-03-30 07:26:43 +00:00
8b707e569f 2006-03-28 Complication
* weekly news.xml update
2006-03-29 02:09:23 +00:00
e4c4b24c61 2006-03-26 Complication
* announce 0.6.1.3
2006-03-27 03:24:38 +00:00
031636e607 * 2006-03-26 0.6.1.13 released 2006-03-26 23:23:49 +00:00
b5c0d77c69 2006-03-25 jrandom
* Added a simple purge and ban of syndie authors, shown as the
      "Purge and ban" button on the addressbook for authors that are already
      on the ignore list.  All of their entries and metadata are deleted from
      the archive, and they are transparently filtered from any remote
      syndication (so no user on the syndie instance will pull any new posts
      from them)
    * More strict tunnel join throttling when congested
2006-03-25 23:50:48 +00:00
d489caa88c 2006-03-24 jrandom
* Try to desync tunnel building near startup (thanks Complication!)
    * If we are highly congested, fall back on only querying the floodfill
      netDb peers, and only storing to those peers too
    * Cleaned up the floodfill-only queries
2006-03-24 20:53:28 +00:00
2a24029acf 2006-03-21 Complication
* Weekly news.xml update
2006-03-22 02:15:13 +00:00
c5aab8c750 2006-03-21 jrandom
* Avoid a very strange (unconfirmed) bug that people using the systray's
      browser picker dialog could cause by disabling the GUI-based browser
      picker.
    * Cut down on subsequent streaming lib reset packets transmitted
    * Use a larger MTU more often
    * Allow netDb searches to query shitlisted peers, as the queries are
      indirect.
    * Add an option to disable non-floodfill netDb searches (non-floodfill
      searches are used by default, but can be disabled by adding
      netDb.floodfillOnly=true to the advanced config)
2006-03-21 23:11:32 +00:00
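
netDb.floodfillOnly is an advanced config property; a tiny hypothetical sketch of how such a flag would be consumed (only the property name comes from the entry above, the reading code does not):

    import java.util.Properties;

    // Hypothetical sketch of consuming the flag; only the property name is from
    // the changelog above.
    public class FloodfillOnlySketch {
        static boolean floodfillOnly(Properties advancedConfig) {
            return "true".equalsIgnoreCase(
                advancedConfig.getProperty("netDb.floodfillOnly", "false"));
        }
    }
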
343748111a 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:39:54 +00:00
c5ddfabfe9 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:31:09 +00:00
1ef33906ed 2006-03-18 jrandom
* Added a new graphs.jsp page to show all of the stats being harvested
2006-03-19 00:23:23 +00:00
f3849a22ad 2006-03-18 jrandom
* Made the netDb search load limitations a little less stringent
    * Add support for specifying the number of periods to be plotted on the
      graphs - e.g. to plot only the last hour of a stat that is averaged at
      the 60 second period, add &periodCount=60
2006-03-18 23:09:35 +00:00
b03ff21d3b 2006-03-17 jrandom
* Add support for graphing the event count as well as the average stat
      value (done by adding &showEvents=true to the URL).  Also supports
      hiding the legend (&hideLegend=true), the grid (&hideGrid=true), and
      the title (&hideTitle=true).
    * Removed an unnecessary arbitrary filter on the profile organizer so we
      can pick high capacity and fast peers more appropriately
2006-03-17 23:46:00 +00:00
52094b10c9 aych tee emm ell smells 2006-03-16 22:37:57 +00:00
fc927efaa3 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:52:09 +00:00
65dc803fb7 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:45:17 +00:00
349adf6690 2006-03-15 Complication
* Trim out an old, inactive IP second-guessing method
      (thanks for spotting, Anonymous!)
2006-03-16 00:49:22 +00:00
2c843fd818 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:36:10 +00:00
863b511cde 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:26:42 +00:00
c417e7c237 2006-03-14 zzz update 2006-03-15 06:02:07 +00:00
1822c0d7d8 2006-03-07 zzz update 2006-03-09 02:19:42 +00:00
94c1c32b51 2006-03-05 zzz
* Remove the +++--- from the logs on i2psnark startup
2006-03-06 01:57:47 +00:00
deb35f4af4 2006-03-05 jrandom
* HTML fixes in Syndie to work better with opera (thanks shaklen!)
    * Give netDb lookups to floodfill peers more time, as they are much more
      likely to succeed (thereby cutting down on the unnecessary netDb
      searches outside the floodfill set)
    * Fix to the SSU IP detection code so we won't use introducers when we
      don't need them (thanks Complication!)
    * Add a brief shitlist to i2psnark so it doesn't keep on trying to reach
      peers given to it
    * Don't let netDb searches wander across too many peers
    * Don't use the 1s bandwidth usage in the tunnel participation throttle,
      as it's too volatile to have much meaning.
    * Don't bork if a Syndie post is missing an entry.sml
2006-03-05 17:07:07 +00:00
883150f943 2006-03-05 Complication
* Reduce exposed statistical information,
      to make build and uptime tracking more expensive
2006-03-05 07:44:59 +00:00
717d1b97b2 2006-03-04 Complication
* Fix the announce URL of orion's tracker in Snark sources
2006-03-04 23:50:01 +00:00
e62135eacc 2006-03-03 Complication
* Explicit check for an index out of bounds exception while parsing
      an inbound IRC command (implicit check was there already)
2006-03-04 03:04:06 +00:00
2c6d953359 2006-03-01 jrandom
* More aggressive tunnel throttling as we approach our bandwidth limit,
      and throttle based off periods wider than 1 second.
    * Included Doubtful Salmon's syndie stylings (thanks!)
2006-03-01 23:01:20 +00:00
2b79e2df3f 2006-02-28 zzz update 2006-03-01 04:11:16 +00:00
fab6e421b8 2006-02-27 zzz
* Update error page templates to add \r, Connection: close, and
      Proxy-connection: close.
2006-02-28 03:55:18 +00:00
589cbd675a * 2006-02-27 0.6.1.12 released
2006-02-27  jrandom
    * Adjust the jbigi.jar to use the athlon-optimized jbigi on windows/amd64
      machines, rather than the generic jbigi (until we have an athlon64
      optimized version)
2006-02-27 19:05:40 +00:00
c486f5980a * 2006-02-27 0.6.1.12 released
2006-02-27  jrandom
    * Adjust the jbigi.jar to use the athlon-optimized jbigi on windows/amd64
      machines, rather than the generic jbigi (until we have an athlon64
      optimized version)
2006-02-27 18:51:31 +00:00
eee21aa301 2006-02-26 jrandom
* Switch from the bouncycastle to the gnu-crypto implementation for
      SHA256, as benchmarks show a 10-30% speedup.
    * Removed some unnecessary object caches
    * Don't close i2psnark streams prematurely
2006-02-26 21:30:56 +00:00
a2854cf6f6 2006-02-25 zzz spelling fix 2006-02-25 21:51:46 +00:00
62b7cf64da 2006-02-25 jrandom
* Made the Syndie permalinks in the thread view point to the blog view
    * Disabled TCP again (since the live net seems to be doing well w/out it)
    * Fix the message time on inbound SSU establishment (thanks zzz!)
    * Don't be so aggressive with parallel tunnel creation when a tunnel pool
      just starts up
2006-02-25 20:41:51 +00:00
7b2a435aad 2006-02-24 jrandom
* Rounding calculation cleanup in the stats, and avoid an uncontested
      mutex (thanks ripple!)
    * SSU handshake cleanup to help force incompatible peers to stop nagging
      us by both not giving them an updated reference to us and by dropping
      future handshake packets from them.
2006-02-24 09:35:52 +00:00
3d8d21e543 2006-02-23 jrandom
* Increase the SSU retransmit ceiling (for slow links)
    * Estimate the sender's SSU MTU (to help see if we agree)
2006-02-23 14:38:39 +00:00
8b7958cff2 2006-02-22 jrandom
* Fix to properly profile tunnel joins (thanks Ragnarok, frosk, et al!)
    * More aggressive poor-man's PMTU, allowing larger MTUs on less reliable
      links
    * Further class validator refactorings
2006-02-23 08:08:37 +00:00
7bb792836d 2006-02-22 jrandom
* Fix to properly profile tunnel joins (thanks Ragnarok, frosk, et al!)
    * More aggressive poor-man's PMTU, allowing larger MTUs on less reliable
      links
    * Further class validator refactorings
2006-02-23 01:48:47 +00:00
03f509ca54 2006-02-22 jrandom
* Handle a rare race under high bandwidth situations in the SSU transport
    * Minor refactoring so we don't confuse sun's 1.6.0-b2 validator
2006-02-22 14:54:22 +00:00
5f05631936 2006-02-21 Complication
* Reactivate TCP transport by default, in addition to re-allowing
2006-02-22 06:19:19 +00:00
5cfedd4c8b 2006-02-21 zzz update 2006-02-22 03:34:02 +00:00
269fec64a5 2006-02-21 zzz
announce 0.6.1.11
2006-02-21 20:12:14 +00:00
f63c6f4717 * 2006-02-21 0.6.1.11 released 2006-02-21 15:20:17 +00:00
b4c495531a 2006-02-21 jrandom
* Throttle the outbound SSU establishment queue, so it doesn't fill up the
      heap when backlogged (and so that the messages queued up on it don't sit
      there forever)
    * Further SSU memory cleanup
    * Clean up the address regeneration code so it knows when to rebuild the
      local info more precisely.
2006-02-21 13:31:18 +00:00
9990126e3e 2006-02-20 jrandom
* Throttle the outbound SSU establishment queue, so it doesn't fill up the
      heap when backlogged (and so that the messages queued up on it don't sit
      there forever)
    * Further SSU memory cleanup
2006-02-21 02:02:48 +00:00
ac8436a8eb 2006-02-20 jrandom
* Properly enable TCP this time (oops)
    * Deal with multiple form handlers on the same page in the console without
      being too annoying (thanks blubb and bd_!)
2006-02-20 18:12:47 +00:00
dee79dfb1c 2006-02-20 jrandom
* Reenable the TCP transport as a fallback (we'll continue to muck with
      debugging SSU-only elsewhere)
2006-02-20 16:42:29 +00:00
9b4e6f475d no need to include this stuff in the updates (they haven't changed) 2006-02-20 14:28:09 +00:00
7672ba23d1 2006-02-20 jrandom
* Major SSU and router tuning to reduce contention, memory usage, and GC
      churn.  There are still issues to be worked out, but this should be a
      substantial improvement.
    * Modified the optional netDb harvester task to support choosing whether
      to use (non-anonymous) direct connections or (anonymous) exploratory
      tunnels to do the harvesting.  Harvesting itself is enabled via the
      advanced config "netDb.shouldHarvest=true" (default is false) and the
      connection type can be chosen via "netDb.harvestDirectly=false" (default
      is false).
2006-02-20 14:27:38 +00:00
4b77ddedcc 2006-02-20 jrandom
* Major SSU and router tuning to reduce contention, memory usage, and GC
      churn.  There are still issues to be worked out, but this should be a
      substantial improvement.
    * Modified the optional netDb harvester task to support choosing whether
      to use (non-anonymous) direct connections or (anonymous) exploratory
      tunnels to do the harvesting.  Harvesting itself is enabled via the
      advanced config "netDb.shouldHarvest=true" (default is false) and the
      connection type can be chosen via "netDb.harvestDirectly=false" (default
      is false).
2006-02-20 14:19:52 +00:00
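
The two harvester properties gate, respectively, whether harvesting runs at all and whether it uses direct connections or exploratory tunnels. A hypothetical sketch of that decision (names other than the two properties are assumed):

    import java.util.Properties;

    // Hypothetical sketch of the two-flag decision; only the property names and
    // defaults are from the changelog above.
    public class HarvesterConfigSketch {
        static void maybeHarvest(Properties advancedConfig) {
            boolean harvest = "true".equalsIgnoreCase(
                advancedConfig.getProperty("netDb.shouldHarvest", "false"));
            if (!harvest) return;
            boolean direct = "true".equalsIgnoreCase(
                advancedConfig.getProperty("netDb.harvestDirectly", "false"));
            if (direct)
                System.out.println("harvesting over direct (non-anonymous) connections");
            else
                System.out.println("harvesting through (anonymous) exploratory tunnels");
        }
    }
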
222af6c090 * Added pruning of suckers history (it used to grow indefinitely). 2006-02-20 05:08:59 +00:00
8e879cb646 2006-02-19 jrandom
* Moved the current net's reseed URL to a different location than where
      the old net looks (dev.i2p.net/i2pdb2/ vs .../i2pdb/)
    * More aggressively expire inbound messages (on receive, not just on send)
    * Add in a hook for breaking backwards compatibility in the SSU wire
      protocol directly by including a version as part of the handshake.  The
      version is currently set to 0, however, so the wire protocol from this
      build is compatible with all earlier SSU implementations.
    * Increased the number of complete message readers, cutting down
      substantially on the delay processing inbound messages.
    * Delete the message history file on startup
    * Reworked the restart/shutdown display on the console (thanks bd_!)
2006-02-19 12:40:03 +00:00
65975df1be 2006-02-19 jrandom
* Moved the current net's reseed URL to a different location than where
      the old net looks (dev.i2p.net/i2pdb2/ vs .../i2pdb/)
    * More aggressively expire inbound messages (on receive, not just on send)
    * Add in a hook for breaking backwards compatibility in the SSU wire
      protocol directly by including a version as part of the handshake.  The
      version is currently set to 0, however, so the wire protocol from this
      build is compatible with all earlier SSU implementations.
    * Increased the number of complete message readers, cutting down
      substantially on the delay processing inbound messages.
    * Delete the message history file on startup
    * Reworked the restart/shutdown display on the console (thanks bd_!)
2006-02-19 12:29:57 +00:00
c94de2fbb5 2006-02-18 jrandom
* Migrate the outbound packets from a central component to the individual
      per-peer components, substantially cutting down on lock contention when
      dealing with higher degrees.
    * Load balance the outbound SSU transfers evenly across peers, rather than
      across messages (so peers with few messages won't be starved by peers
      with many).
    * Reduce the frequency of router info rebuilds (thanks bar!)
2006-02-19 03:58:08 +00:00
5aa335740a 2006-02-18 jrandom
* Migrate the outbound packets from a central component to the individual
      per-peer components, substantially cutting down on lock contention when
      dealing with higher degrees.
    * Load balance the outbound SSU transfers evenly across peers, rather than
      across messages (so peers with few messages won't be starved by peers
      with many).
    * Reduce the frequency of router info rebuilds (thanks bar!)
2006-02-19 03:22:31 +00:00
1202751359 2006-02-18 jrandom
* Add a new AIMD throttle in SSU to control the number of concurrent
      messages being sent to a given peer, in addition to the throttle on the
      number of concurrent bytes to that peer.
    * Adjust the existing SSU outbound queue to throttle based on the queue's
      lag, not an arbitrary number of packets.
2006-02-18 06:48:48 +00:00
34fcf53d87 2006-02-18 jrandom
* Add a new AIMD throttle in SSU to control the number of concurrent
      messages being sent to a given peer, in addition to the throttle on the
      number of concurrent bytes to that peer.
    * Adjust the existing SSU outbound queue to throttle based on the queue's
      lag, not an arbitrary number of packets.
2006-02-18 06:38:30 +00:00
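
AIMD here is the usual additive-increase/multiplicative-decrease window, applied to concurrent messages per peer. A hypothetical sketch, not the SSU implementation:

    // Hypothetical sketch of an AIMD limit on concurrent messages to one peer:
    // widen the window by one on success, halve it on failure, never below one.
    public class AimdThrottleSketch {
        private int window = 1;     // allowed concurrent messages
        private int inFlight = 0;

        synchronized boolean tryAcquire() {
            if (inFlight >= window) return false;
            inFlight++;
            return true;
        }
        synchronized void messageAcked() {
            inFlight--;
            window++;                              // additive increase
        }
        synchronized void messageFailed() {
            inFlight--;
            window = Math.max(1, window / 2);      // multiplicative decrease
        }
    }
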
9ddc632b9f 2006-02-17 jrandom
* Properly fix the build request queue throttling, using queue age to
      detect congestion, rather than queue size.
2006-02-17 22:29:28 +00:00
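
Detecting congestion by queue age rather than queue size means asking how long the oldest pending request has waited. A hypothetical sketch:

    // Hypothetical sketch of age-based congestion detection: the queue is
    // congested when its oldest pending request has waited longer than the
    // allowed time, however many requests are queued.
    public class QueueAgeSketch {
        static boolean congested(long oldestEnqueueTimeMs, long nowMs, long maxWaitMs) {
            if (oldestEnqueueTimeMs <= 0) return false;   // empty queue
            return (nowMs - oldestEnqueueTimeMs) > maxWaitMs;
        }
    }
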
941b65eb32 2006-02-17 jrandom
* Disable the message history log file by default (duh - feel free to
      delete messageHistory.txt after upgrading.  thanks deathfatty!)
    * Limit the size of the inbound tunnel build request queue so we don't
      get an insane backlog of requests that we're bound to reject, and adjust
      the queue processing so we keep on churning through them when we've got
      a backlog.
    * Small fixes for the multiuser syndie operation (thanks Complication!)
    * Renamed modified PRNG classes that were imported from gnu-crypto so we
      don't conflict with JVMs using that as a JCE provider (thanks blx!)
2006-02-17 09:16:00 +00:00
8c9167464b 2006-02-17 jrandom
* Disable the message history log file by default (duh - feel free to
      delete messageHistory.txt after upgrading.  thanks deathfatty!)
    * Limit the size of the inbound tunnel build request queue so we don't
      get an insane backlog of requests that we're bound to reject, and adjust
      the queue processing so we keep on churning through them when we've got
      a backlog.
    * Small fixes for the multiuser syndie operation (thanks Complication!)
    * Renamed modified PRNG classes that were imported from gnu-crypto so we
      don't conflict with JVMs using that as a JCE provider (thanks blx!)
2006-02-17 09:07:53 +00:00
201 changed files with 13707 additions and 3420 deletions


@ -21,11 +21,12 @@ NATIVE_DIR=native
# router.jar: full I2P router
# jbigi.jar: collection of native optimized GMP routines for crypto
JAR_BASE=i2p.jar mstreaming.jar streaming.jar
JAR_CLIENTS=i2ptunnel.jar sam.jar i2psnark.jar
JAR_CLIENTS=i2ptunnel.jar sam.jar
JAR_ROUTER=router.jar
JAR_JBIGI=jbigi.jar
JAR_XML=xml-apis.jar resolver.jar xercesImpl.jar
JAR_CONSOLE=\
i2psnark.jar \
javax.servlet.jar \
commons-el.jar \
commons-logging.jar \
@ -79,15 +80,15 @@ native_clean:
native_shared: libi2p.so
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2p_dsa --main=net.i2p.crypto.DSAEngine
@echo "* i2p_dsa is a simple test app with the DSA engine and Fortuna PRNG to make sure crypto is working"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/prng --main=gnu.crypto.prng.Fortuna
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/prng --main=gnu.crypto.prng.FortunaStandalone
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnel --main=net.i2p.i2ptunnel.I2PTunnel
@echo "* i2ptunnel is mihi's I2PTunnel CLI"
@echo " run it as ./i2ptunnel -cli to avoid awt complaints"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnelctl --main=net.i2p.i2ptunnel.TunnelControllerGroup
@echo "* i2ptunnelctl is a controller for I2PTunnel, reading i2ptunnel.config"
@echo " and launching the appropriate proxies"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2psnark --main=org.klomp.snark.Snark
@echo "* i2psnark is an anonymous bittorrent client"
#@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2psnark --main=org.klomp.snark.Snark
#@echo "* i2psnark is an anonymous bittorrent client"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2prouter --main=net.i2p.router.Router
@echo "* i2prouter is the main I2P router"
@echo " it can be used, and while the router console won't load,"
@ -95,6 +96,6 @@ native_shared: libi2p.so
libi2p.so:
@echo "* Building libi2p.so"
@(cd build ; ${GCJ} ${OPTIMIZE} -fPIC -fjni -shared -o ../${NATIVE_DIR}/libi2p.so ${LIBI2P_JARS} ; cd .. )
@(cd build ; time ${GCJ} ${OPTIMIZE} -fPIC -fjni -shared -o ../${NATIVE_DIR}/libi2p.so ${LIBI2P_JARS} ; cd .. )
@ls -l ${NATIVE_DIR}/libi2p.so
@echo "* libi2p.so built"


@ -10,6 +10,7 @@ import net.i2p.client.streaming.I2PSocket;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.client.streaming.I2PSocketManagerFactory;
import net.i2p.util.Log;
import net.i2p.util.SimpleTimer;
import java.io.*;
import java.util.*;
@ -31,6 +32,7 @@ public class I2PSnarkUtil {
private Map _opts;
private I2PSocketManager _manager;
private boolean _configured;
private Set _shitlist;
private I2PSnarkUtil() {
_context = I2PAppContext.getGlobalContext();
@ -38,6 +40,7 @@ public class I2PSnarkUtil {
_opts = new HashMap();
setProxy("127.0.0.1", 4444);
setI2CPConfig("127.0.0.1", 7654, null);
_shitlist = new HashSet(64);
_configured = false;
}
@ -93,9 +96,9 @@ public class I2PSnarkUtil {
if (opts.getProperty("i2p.streaming.inactivityTimeout") == null)
opts.setProperty("i2p.streaming.inactivityTimeout", "90000");
if (opts.getProperty("i2p.streaming.inactivityAction") == null)
opts.setProperty("i2p.streaming.inactivityAction", "1");
if (opts.getProperty("i2p.streaming.writeTimeout") == null)
opts.setProperty("i2p.streaming.writeTimeout", "90000");
opts.setProperty("i2p.streaming.inactivityAction", "2"); // 1 == disconnect, 2 == ping
//if (opts.getProperty("i2p.streaming.writeTimeout") == null)
// opts.setProperty("i2p.streaming.writeTimeout", "90000");
//if (opts.getProperty("i2p.streaming.readTimeout") == null)
// opts.setProperty("i2p.streaming.readTimeout", "120000");
_manager = I2PSocketManagerFactory.createManager(_i2cpHost, _i2cpPort, opts);
@ -110,18 +113,36 @@ public class I2PSnarkUtil {
public void disconnect() {
I2PSocketManager mgr = _manager;
_manager = null;
_shitlist.clear();
mgr.destroySocketManager();
}
/** connect to the given destination */
I2PSocket connect(PeerID peer) throws IOException {
Hash dest = peer.getAddress().calculateHash();
synchronized (_shitlist) {
if (_shitlist.contains(dest))
throw new IOException("Not trying to contact " + dest.toBase64() + ", as they are shitlisted");
}
try {
return _manager.connect(peer.getAddress());
I2PSocket rv = _manager.connect(peer.getAddress());
if (rv != null) synchronized (_shitlist) { _shitlist.remove(dest); }
return rv;
} catch (I2PException ie) {
synchronized (_shitlist) {
_shitlist.add(dest);
}
SimpleTimer.getInstance().addEvent(new Unshitlist(dest), 10*60*1000);
throw new IOException("Unable to reach the peer " + peer + ": " + ie.getMessage());
}
}
private class Unshitlist implements SimpleTimer.TimedEvent {
private Hash _dest;
public Unshitlist(Hash dest) { _dest = dest; }
public void timeReached() { synchronized (_shitlist) { _shitlist.remove(_dest); } }
}
/**
* fetch the given URL, returning the file it is stored in, or null on error
*/


@ -633,11 +633,11 @@ public class Snark
boolean allocating = false;
public void storageCreateFile(Storage storage, String name, long length)
{
if (allocating)
System.out.println(); // Done with last file.
//if (allocating)
// System.out.println(); // Done with last file.
System.out.print("Creating file '" + name
+ "' of length " + length + ": ");
//System.out.print("Creating file '" + name
// + "' of length " + length + ": ");
allocating = true;
}
@ -647,10 +647,10 @@ public class Snark
public void storageAllocated(Storage storage, long length)
{
allocating = true;
System.out.print(".");
//System.out.print(".");
allocated += length;
if (allocated == meta.getTotalLength())
System.out.println(); // We have all the disk space we need.
//if (allocated == meta.getTotalLength())
// System.out.println(); // We have all the disk space we need.
}
boolean allChecked = false;
@ -664,26 +664,21 @@ public class Snark
// Use the MetaInfo from the storage since our own might not
// yet be setup correctly.
MetaInfo meta = storage.getMetaInfo();
if (meta != null)
System.out.print("Checking existing "
+ meta.getPieces()
+ " pieces: ");
//if (meta != null)
// System.out.print("Checking existing "
// + meta.getPieces()
// + " pieces: ");
checking = true;
}
if (checking)
if (checked)
System.out.print("+");
else
System.out.print("-");
else
if (!checking)
Snark.debug("Got " + (checked ? "" : "BAD ") + "piece: " + num,
Snark.INFO);
}
public void storageAllChecked(Storage storage)
{
if (checking)
System.out.println();
//if (checking)
// System.out.println();
allChecked = true;
checking = false;
@ -693,7 +688,7 @@ public class Snark
{
Snark.debug("Completely received " + torrent, Snark.INFO);
//storage.close();
System.out.println("Completely received: " + torrent);
//System.out.println("Completely received: " + torrent);
if (completeListener != null)
completeListener.torrentComplete(this);
}


@ -464,7 +464,7 @@ public class SnarkManager implements Snark.CompleteListener {
private static final String DEFAULT_TRACKERS[] = {
"Postman's tracker", "http://YRgrgTLGnbTq2aZOZDJQ~o6Uk5k6TK-OZtx0St9pb0G-5EGYURZioxqYG8AQt~LgyyI~NCj6aYWpPO-150RcEvsfgXLR~CxkkZcVpgt6pns8SRc3Bi-QSAkXpJtloapRGcQfzTtwllokbdC-aMGpeDOjYLd8b5V9Im8wdCHYy7LRFxhEtGb~RL55DA8aYOgEXcTpr6RPPywbV~Qf3q5UK55el6Kex-6VCxreUnPEe4hmTAbqZNR7Fm0hpCiHKGoToRcygafpFqDw5frLXToYiqs9d4liyVB-BcOb0ihORbo0nS3CLmAwZGvdAP8BZ7cIYE3Z9IU9D1G8JCMxWarfKX1pix~6pIA-sp1gKlL1HhYhPMxwyxvuSqx34o3BqU7vdTYwWiLpGM~zU1~j9rHL7x60pVuYaXcFQDR4-QVy26b6Pt6BlAZoFmHhPcAuWfu-SFhjyZYsqzmEmHeYdAwa~HojSbofg0TMUgESRXMw6YThK1KXWeeJVeztGTz25sL8AAAA.i2p/announce.php"
, "Orion's tracker", "http://gKik1lMlRmuroXVGTZ~7v4Vez3L3ZSpddrGZBrxVriosCQf7iHu6CIk8t15BKsj~P0JJpxrofeuxtm7SCUAJEr0AIYSYw8XOmp35UfcRPQWyb1LsxUkMT4WqxAT3s1ClIICWlBu5An~q-Mm0VFlrYLIPBWlUFnfPR7jZ9uP5ZMSzTKSMYUWao3ejiykr~mtEmyls6g-ZbgKZawa9II4zjOy-hdxHgP-eXMDseFsrym4Gpxvy~3Fv9TuiSqhpgm~UeTo5YBfxn6~TahKtE~~sdCiSydqmKBhxAQ7uT9lda7xt96SS09OYMsIWxLeQUWhns-C~FjJPp1D~IuTrUpAFcVEGVL-BRMmdWbfOJEcWPZ~CBCQSO~VkuN1ebvIOr9JBerFMZSxZtFl8JwcrjCIBxeKPBmfh~xYh16BJm1BBBmN1fp2DKmZ2jBNkAmnUbjQOqWvUcehrykWk5lZbE7bjJMDFH48v3SXwRuDBiHZmSbsTY6zhGY~GkMQHNGxPMMSIAAAA.i2p/bt"
, "Orion's tracker", "http://gKik1lMlRmuroXVGTZ~7v4Vez3L3ZSpddrGZBrxVriosCQf7iHu6CIk8t15BKsj~P0JJpxrofeuxtm7SCUAJEr0AIYSYw8XOmp35UfcRPQWyb1LsxUkMT4WqxAT3s1ClIICWlBu5An~q-Mm0VFlrYLIPBWlUFnfPR7jZ9uP5ZMSzTKSMYUWao3ejiykr~mtEmyls6g-ZbgKZawa9II4zjOy-hdxHgP-eXMDseFsrym4Gpxvy~3Fv9TuiSqhpgm~UeTo5YBfxn6~TahKtE~~sdCiSydqmKBhxAQ7uT9lda7xt96SS09OYMsIWxLeQUWhns-C~FjJPp1D~IuTrUpAFcVEGVL-BRMmdWbfOJEcWPZ~CBCQSO~VkuN1ebvIOr9JBerFMZSxZtFl8JwcrjCIBxeKPBmfh~xYh16BJm1BBBmN1fp2DKmZ2jBNkAmnUbjQOqWvUcehrykWk5lZbE7bjJMDFH48v3SXwRuDBiHZmSbsTY6zhGY~GkMQHNGxPMMSIAAAA.i2p/bt/announce.php"
// , "The freak's tracker", "http://mHKva9x24E5Ygfey2llR1KyQHv5f8hhMpDMwJDg1U-hABpJ2NrQJd6azirdfaR0OKt4jDlmP2o4Qx0H598~AteyD~RJU~xcWYdcOE0dmJ2e9Y8-HY51ie0B1yD9FtIV72ZI-V3TzFDcs6nkdX9b81DwrAwwFzx0EfNvK1GLVWl59Ow85muoRTBA1q8SsZImxdyZ-TApTVlMYIQbdI4iQRwU9OmmtefrCe~ZOf4UBS9-KvNIqUL0XeBSqm0OU1jq-D10Ykg6KfqvuPnBYT1BYHFDQJXW5DdPKwcaQE4MtAdSGmj1epDoaEBUa9btQlFsM2l9Cyn1hzxqNWXELmx8dRlomQLlV4b586dRzW~fLlOPIGC13ntPXogvYvHVyEyptXkv890jC7DZNHyxZd5cyrKC36r9huKvhQAmNABT2Y~pOGwVrb~RpPwT0tBuPZ3lHYhBFYmD8y~AOhhNHKMLzea1rfwTvovBMByDdFps54gMN1mX4MbCGT4w70vIopS9yAAAA.i2p/bytemonsoon/announce.php"
};


@ -127,7 +127,7 @@ public class I2PSnarkServlet extends HttpServlet {
}
} else if ( (newURL != null) && (newURL.trim().length() > "http://.i2p/".length()) ) {
_manager.addMessage("Fetching " + newURL);
I2PThread fetch = new I2PThread(new FetchAndAdd(newURL), "Fetch and add");
I2PThread fetch = new I2PThread(new FetchAndAdd(_manager, newURL), "Fetch and add");
fetch.start();
} else {
// no file or URL specified
@ -267,56 +267,6 @@ public class I2PSnarkServlet extends HttpServlet {
}
}
private class FetchAndAdd implements Runnable {
private String _url;
public FetchAndAdd(String url) {
_url = url;
}
public void run() {
_url = _url.trim();
File file = I2PSnarkUtil.instance().get(_url, false);
try {
if ( (file != null) && (file.exists()) && (file.length() > 0) ) {
_manager.addMessage("Torrent fetched from " + _url);
FileInputStream in = null;
try {
in = new FileInputStream(file);
MetaInfo info = new MetaInfo(in);
String name = info.getName();
name = name.replace('/', '_');
name = name.replace('\\', '_');
name = name.replace('&', '+');
name = name.replace('\'', '_');
name = name.replace('"', '_');
name = name.replace('`', '_');
name = name + ".torrent";
File torrentFile = new File(_manager.getDataDir(), name);
String canonical = torrentFile.getCanonicalPath();
if (torrentFile.exists()) {
if (_manager.getTorrent(canonical) != null)
_manager.addMessage("Torrent already running: " + name);
else
_manager.addMessage("Torrent already in the queue: " + name);
} else {
FileUtil.copy(file.getAbsolutePath(), canonical, true);
_manager.addTorrent(canonical);
}
} catch (IOException ioe) {
_manager.addMessage("Torrent at " + _url + " was not valid: " + ioe.getMessage());
} finally {
try { in.close(); } catch (IOException ioe) {}
}
} else {
_manager.addMessage("Torrent was not retrieved from " + _url);
}
} finally {
if (file != null) file.delete();
}
}
}
private List getSortedSnarks(HttpServletRequest req) {
Set files = _manager.listTorrentFiles();
TreeSet fileNames = new TreeSet(files); // sorts it alphabetically
@ -635,4 +585,57 @@ public class I2PSnarkServlet extends HttpServlet {
private static final String TABLE_FOOTER = "</table>\n";
private static final String FOOTER = "</body></html>";
}
}
class FetchAndAdd implements Runnable {
private SnarkManager _manager;
private String _url;
public FetchAndAdd(SnarkManager mgr, String url) {
_manager = mgr;
_url = url;
}
public void run() {
_url = _url.trim();
File file = I2PSnarkUtil.instance().get(_url, false);
try {
if ( (file != null) && (file.exists()) && (file.length() > 0) ) {
_manager.addMessage("Torrent fetched from " + _url);
FileInputStream in = null;
try {
in = new FileInputStream(file);
MetaInfo info = new MetaInfo(in);
String name = info.getName();
name = name.replace('/', '_');
name = name.replace('\\', '_');
name = name.replace('&', '+');
name = name.replace('\'', '_');
name = name.replace('"', '_');
name = name.replace('`', '_');
name = name + ".torrent";
File torrentFile = new File(_manager.getDataDir(), name);
String canonical = torrentFile.getCanonicalPath();
if (torrentFile.exists()) {
if (_manager.getTorrent(canonical) != null)
_manager.addMessage("Torrent already running: " + name);
else
_manager.addMessage("Torrent already in the queue: " + name);
} else {
FileUtil.copy(file.getAbsolutePath(), canonical, true);
_manager.addTorrent(canonical);
}
} catch (IOException ioe) {
_manager.addMessage("Torrent at " + _url + " was not valid: " + ioe.getMessage());
} finally {
try { in.close(); } catch (IOException ioe) {}
}
} else {
_manager.addMessage("Torrent was not retrieved from " + _url);
}
} finally {
if (file != null) file.delete();
}
}
}


@ -5,6 +5,7 @@ import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.lang.IndexOutOfBoundsException;
import net.i2p.I2PAppContext;
import net.i2p.client.streaming.I2PSocket;
@ -43,7 +44,7 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
l,
notifyThis,
"IRCHandler " + (++__clientId), tunnel);
StringTokenizer tok = new StringTokenizer(destinations, ",");
dests = new ArrayList(1);
while (tok.hasMoreTokens()) {
@ -80,9 +81,10 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
try {
i2ps = createI2PSocket(dest);
i2ps.setReadTimeout(readTimeout);
Thread in = new I2PThread(new IrcInboundFilter(s,i2ps));
StringBuffer expectedPong = new StringBuffer();
Thread in = new I2PThread(new IrcInboundFilter(s,i2ps, expectedPong));
in.start();
Thread out = new I2PThread(new IrcOutboundFilter(s,i2ps));
Thread out = new I2PThread(new IrcOutboundFilter(s,i2ps, expectedPong));
out.start();
} catch (Exception ex) {
if (_log.shouldLog(Log.ERROR))
@ -118,10 +120,12 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
private Socket local;
private I2PSocket remote;
private StringBuffer expectedPong;
IrcInboundFilter(Socket _local, I2PSocket _remote) {
IrcInboundFilter(Socket _local, I2PSocket _remote, StringBuffer pong) {
local=_local;
remote=_remote;
expectedPong=pong;
}
public void run() {
@ -146,7 +150,9 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
break;
if(inmsg.endsWith("\r"))
inmsg=inmsg.substring(0,inmsg.length()-1);
String outmsg = inboundFilter(inmsg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("in: [" + inmsg + "]");
String outmsg = inboundFilter(inmsg, expectedPong);
if(outmsg!=null)
{
if(!inmsg.equals(outmsg)) {
@ -188,10 +194,12 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
private Socket local;
private I2PSocket remote;
private StringBuffer expectedPong;
IrcOutboundFilter(Socket _local, I2PSocket _remote) {
IrcOutboundFilter(Socket _local, I2PSocket _remote, StringBuffer pong) {
local=_local;
remote=_remote;
expectedPong=pong;
}
public void run() {
@ -216,7 +224,9 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
break;
if(inmsg.endsWith("\r"))
inmsg=inmsg.substring(0,inmsg.length()-1);
String outmsg = outboundFilter(inmsg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("out: [" + inmsg + "]");
String outmsg = outboundFilter(inmsg, expectedPong);
if(outmsg!=null)
{
if(!inmsg.equals(outmsg)) {
@ -255,7 +265,7 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
*
*/
public static String inboundFilter(String s) {
public String inboundFilter(String s, StringBuffer expectedPong) {
String field[]=s.split(" ",4);
String command;
@ -263,8 +273,8 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
final String[] allowedCommands =
{
"NOTICE",
"PING",
"PONG",
//"PING",
//"PONG",
"MODE",
"JOIN",
"NICK",
@ -277,9 +287,14 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
if(field[0].charAt(0)==':')
idx++;
command = field[idx++];
try { command = field[idx++]; }
catch (IndexOutOfBoundsException ioobe) // wtf, server sent borked command?
{
_log.warn("Dropping defective message: index out of bounds while extracting command.");
return null;
}
idx++; //skip victim
// Allow numerical responses
@ -287,6 +302,21 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
new Integer(command);
return s;
} catch(NumberFormatException nfe){}
if ("PING".equals(command))
return "PING 127.0.0.1"; // no way to know what the ircd to i2ptunnel server con is, so localhost works
if ("PONG".equals(command)) {
// Turn the received ":irc.freshcoffee.i2p PONG irc.freshcoffee.i2p :127.0.0.1"
// into ":127.0.0.1 PONG 127.0.0.1 " so that the caller can append the client's extra parameter
// though, does 127.0.0.1 work for irc clients connecting remotely? and for all of them? sure would
// be great if irc clients actually followed the RFCs here, but i guess thats too much to ask.
// If we haven't PINGed them, or the PING we sent isn't something we know how to filter, this
// is blank.
String pong = expectedPong.length() > 0 ? expectedPong.toString() : null;
expectedPong.setLength(0);
return pong;
}
// Allow all allowedCommands
for(int i=0;i<allowedCommands.length;i++) {
@ -318,14 +348,13 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
return null;
}
public static String outboundFilter(String s) {
public String outboundFilter(String s, StringBuffer expectedPong) {
String field[]=s.split(" ",3);
String command;
final String[] allowedCommands =
{
"NOTICE",
"PONG",
"MODE",
"JOIN",
"NICK",
@ -339,7 +368,8 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
"MAP", // seems safe enough, the ircd should protect themselves though
"PART",
"OPER",
"PING",
// "PONG", // replaced with a filtered PING/PONG since some clients send the server IP (thanks aardvax!)
// "PING",
"KICK",
"HELPME",
"RULES",
@ -355,6 +385,43 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
command = field[0].toUpperCase();
if ("PING".equals(command)) {
// Most clients just send a PING and are happy with any old PONG. Others,
// like BitchX, actually expect certain behavior. It sends two different pings:
// "PING :irc.freshcoffee.i2p" and "PING 1234567890 127.0.0.1" (where the IP is the proxy)
// the PONG to the former seems to be "PONG 127.0.0.1", while the PONG to the later is
// ":irc.freshcoffee.i2p PONG irc.freshcoffe.i2p :1234567890".
// We don't want to send them our proxy's IP address, so we need to rewrite the PING
// sent to the server, but when we get a PONG back, use what we expected, rather than
// what they sent.
//
// Yuck.
String rv = null;
expectedPong.setLength(0);
if (field.length == 1) { // PING
rv = "PING";
expectedPong.append("PONG 127.0.0.1");
} else if (field.length == 2) { // PING nonce
rv = "PING " + field[1];
expectedPong.append("PONG ").append(field[1]);
} else if (field.length == 3) { // PING nonce serverLocation
rv = "PING " + field[1];
expectedPong.append("PONG ").append(field[1]);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("IRC client sent a PING we don't understand, filtering it (\"" + s + "\")");
rv = null;
}
if (_log.shouldLog(Log.WARN))
_log.warn("sending ping " + rv + ", waiting for " + expectedPong + " orig was [" + s + "]");
return rv;
}
if ("PONG".equals(command))
return "PONG 127.0.0.1"; // no way to know what the ircd to i2ptunnel server con is, so localhost works
// Allow all allowedCommands
for(int i=0;i<allowedCommands.length;i++)
{


@ -250,6 +250,10 @@ public class I2PTunnelRunner extends I2PThread implements I2PSocket.SocketErrorL
+ from + " and " + to);
}
// boo, hiss! shouldn't need this - the streaming lib should be configurable, but
// somehow the inactivity timer is sometimes failing to get triggered properly
//i2ps.setReadTimeout(2*60*1000);
ByteArray ba = _cache.acquire();
byte[] buffer = ba.getData(); // new byte[NETWORK_BUFFER_SIZE];
try {

View File

@ -190,6 +190,7 @@ public class IndexBean {
}
private String saveChanges() {
// Get current tunnel controller
TunnelController cur = getController(_tunnel);
Properties config = getConfig();
@ -205,21 +206,28 @@ public class IndexBean {
} else {
cur.setConfig(config, "");
}
if ("ircclient".equals(cur.getType()) ||
"httpclient".equals(cur.getType()) ||
"client".equals(cur.getType())) {
// all clients use the same I2CP session, and as such, use the same
// I2CP options
// Only modify other shared tunnels
// if the current tunnel is shared, and of supported type
if ("true".equalsIgnoreCase(cur.getSharedClient()) &&
("ircclient".equals(cur.getType()) ||
"httpclient".equals(cur.getType()) ||
"client".equals(cur.getType()))) {
// all clients use the same I2CP session, and as such, use the same I2CP options
List controllers = _group.getControllers();
for (int i = 0; i < controllers.size(); i++) {
TunnelController c = (TunnelController)controllers.get(i);
// Current tunnel modified by user, skip
if (c == cur) continue;
//only change when they really are declared of beeing a sharedClient
if (("httpclient".equals(c.getType()) ||
"ircclient".equals(c.getType())||
"client".equals(c.getType())
) && "true".equalsIgnoreCase(c.getSharedClient())) {
// Only modify this non-current tunnel
// if it belongs to a shared destination, and is of supported type
if ("true".equalsIgnoreCase(c.getSharedClient()) &&
("httpclient".equals(c.getType()) ||
"ircclient".equals(c.getType()) ||
"client".equals(c.getType()))) {
Properties cOpt = c.getConfig("");
if (_tunnelQuantity != null) {
cOpt.setProperty("option.inbound.quantity", _tunnelQuantity);
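Informally, the new checks boil down to a single predicate applied both to the tunnel being edited and to every other tunnel before the shared I2CP options (e.g. option.inbound.quantity) are copied across. A hypothetical helper, not code from this changeset:
// boolean sharesI2CPSession(TunnelController c) {
//     return "true".equalsIgnoreCase(c.getSharedClient()) &&
//            ("client".equals(c.getType()) ||
//             "httpclient".equals(c.getType()) ||
//             "ircclient".equals(c.getType()));
// }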

Binary file not shown.

View File

@ -25,6 +25,7 @@
<pathelement location="../../systray/java/build/systray.jar" />
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" /> <!-- we dont care if we're not on win32 -->
<pathelement location="../../jrobin/jrobin-1.4.0.jar" />
</classpath>
</javac>
</target>
@ -34,6 +35,12 @@
<attribute name="Class-Path" value="i2p.jar router.jar" />
</manifest>
</jar>
<delete dir="./tmpextract" />
<unjar src="../../jrobin/jrobin-1.4.0.jar" dest="./tmpextract" />
<jar destfile="./build/routerconsole.jar" basedir="./tmpextract" update="true" />
<delete dir="./tmpextract" />
<ant target="war" />
</target>
<target name="war" depends="precompilejsp">
@ -60,6 +67,7 @@
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" />
<pathelement location="build/routerconsole.jar" />
<pathelement location="build/" />
<pathelement location="../../../router/java/build/router.jar" />
<pathelement location="../../../core/java/build/i2p.jar" />
</classpath>
@ -86,6 +94,7 @@
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" />
<pathelement location="build/routerconsole.jar" />
<pathelement location="build" />
<pathelement location="../../../router/java/build/router.jar" />
<pathelement location="../../../core/java/build/i2p.jar" />
</classpath>

View File

@ -30,7 +30,6 @@ import net.i2p.router.web.ConfigServiceHandler.UpdateWrapperManagerAndRekeyTask;
*/
public class ConfigNetHandler extends FormHandler {
private String _hostname;
private boolean _guessRequested;
private boolean _reseedRequested;
private boolean _saveRequested;
private boolean _recheckReachabilityRequested;
@ -38,6 +37,8 @@ public class ConfigNetHandler extends FormHandler {
private boolean _requireIntroductions;
private boolean _hiddenMode;
private boolean _dynamicKeys;
private String _ntcpHostname;
private String _ntcpPort;
private String _tcpPort;
private String _udpPort;
private String _inboundRate;
@ -52,9 +53,7 @@ public class ConfigNetHandler extends FormHandler {
private boolean _ratesOnly;
protected void processForm() {
if (_guessRequested) {
guessHostname();
} else if (_reseedRequested) {
if (_reseedRequested) {
reseed();
} else if (_saveRequested || ( (_action != null) && ("Save changes".equals(_action)) )) {
saveChanges();
@ -65,7 +64,6 @@ public class ConfigNetHandler extends FormHandler {
}
}
public void setGuesshost(String moo) { _guessRequested = true; }
public void setReseed(String moo) { _reseedRequested = true; }
public void setSave(String moo) { _saveRequested = true; }
public void setEnabletimesync(String moo) { _timeSyncEnabled = true; }
@ -82,6 +80,12 @@ public class ConfigNetHandler extends FormHandler {
public void setTcpPort(String port) {
_tcpPort = (port != null ? port.trim() : null);
}
public void setNtcphost(String host) {
_ntcpHostname = (host != null ? host.trim() : null);
}
public void setNtcpport(String port) {
_ntcpPort = (port != null ? port.trim() : null);
}
public void setUdpPort(String port) {
_udpPort = (port != null ? port.trim() : null);
}
@ -110,38 +114,8 @@ public class ConfigNetHandler extends FormHandler {
_sharePct = (pct != null ? pct.trim() : null);
}
private static final String IP_PREFIX = "<h1>Your IP is ";
private static final String IP_SUFFIX = " <br></h1>";
private void guessHostname() {
BufferedReader reader = null;
try {
URL url = new URL("http://www.whatismyip.com/");
URLConnection con = url.openConnection();
con.connect();
reader = new BufferedReader(new InputStreamReader(con.getInputStream()));
String line = null;
while ( (line = reader.readLine()) != null) {
if (line.startsWith(IP_PREFIX)) {
int end = line.indexOf(IP_SUFFIX);
if (end == -1) {
addFormError("Unable to guess the host (BAD_SUFFIX)");
return;
}
String ip = line.substring(IP_PREFIX.length(), end);
addFormNotice("Host guess: " + ip);
return;
}
}
addFormError("Unable to guess the host (NO_PREFIX)");
} catch (IOException ioe) {
addFormError("Unable to guess the host (IO_ERROR)");
_context.logManager().getLog(ConfigNetHandler.class).error("Unable to guess the host", ioe);
} finally {
if (reader != null) try { reader.close(); } catch (IOException ioe) {}
}
}
private static final String DEFAULT_SEED_URL = "http://dev.i2p.net/i2pdb/";
private static final String DEFAULT_SEED_URL = ReseedHandler.DEFAULT_SEED_URL;
/**
* Reseed has been requested, so lets go ahead and do it. Fetch all of
* the routerInfo-*.dat files from the specified URL (or the default) and
@ -256,6 +230,30 @@ public class ConfigNetHandler extends FormHandler {
restartRequired = true;
}
}
if ( (_ntcpHostname != null) && (_ntcpHostname.length() > 0) && (_ntcpPort != null) && (_ntcpPort.length() > 0) ) {
String oldHost = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
if ( (oldHost == null) || (!oldHost.equalsIgnoreCase(_ntcpHostname)) ||
(oldPort == null) || (!oldPort.equalsIgnoreCase(_ntcpPort)) ) {
_context.router().setConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME, _ntcpHostname);
_context.router().setConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT, _ntcpPort);
addFormNotice("Updating inbound TCP settings from " + oldHost + ":" + oldPort
+ " to " + _ntcpHostname + ":" + _ntcpPort);
restartRequired = true;
}
} else {
String oldHost = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
if ( (oldHost != null) || (oldPort != null) ) {
_context.router().removeConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
_context.router().removeConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
addFormNotice("Updating inbound TCP settings from " + oldHost + ":" + oldPort
+ " so that we no longer receive inbound TCP connections");
restartRequired = true;
}
}
if ( (_udpPort != null) && (_udpPort.length() > 0) ) {
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_UDP_PORT);
if ( (oldPort == null) && (_udpPort.equals("8887")) ) {
@ -284,7 +282,7 @@ public class ConfigNetHandler extends FormHandler {
// If hidden mode value changes, restart is required
if (_hiddenMode && "false".equalsIgnoreCase(_context.getProperty(Router.PROP_HIDDEN, "false"))) {
_context.router().setConfigSetting(Router.PROP_HIDDEN, "true");
_context.router().getRouterInfo().addCapability(RouterInfo.CAPABILITY_HIDDEN);
_context.router().addCapabilities(_context.router().getRouterInfo());
addFormNotice("Gracefully restarting into Hidden Router Mode. Make sure you have no 0-1 length "
+ "<a href=\"configtunnels.jsp\">tunnels!</a>");
hiddenSwitch();

View File

@ -48,6 +48,10 @@ public class ConfigNetHelper {
}
return "" + port;
}
public final static String PROP_I2NP_NTCP_HOSTNAME = "i2np.ntcp.hostname";
public final static String PROP_I2NP_NTCP_PORT = "i2np.ntcp.port";
public String getNtcphostname() { return _context.getProperty(PROP_I2NP_NTCP_HOSTNAME); }
public String getNtcpport() { return _context.getProperty(PROP_I2NP_NTCP_PORT); }
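For reference, these getters read advanced configuration entries along the following lines (the host and port values are placeholders):
# advanced router configuration (typically router.config); values are illustrative
i2np.ntcp.hostname=myrouter.example.org
i2np.ntcp.port=9000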
public String getUdpAddress() {
RouterAddress addr = _context.router().getRouterInfo().getTargetAddress("SSU");

View File

@ -0,0 +1,75 @@
package net.i2p.router.web;
import net.i2p.data.DataHelper;
import net.i2p.router.Router;
import net.i2p.router.RouterContext;
import net.i2p.router.web.ConfigServiceHandler.UpdateWrapperManagerTask;
import net.i2p.router.web.ConfigServiceHandler.UpdateWrapperManagerAndRekeyTask;
/**
* simple helper to control restarts/shutdowns in the left hand nav
*
*/
public class ConfigRestartBean {
public static String getNonce() {
RouterContext ctx = ContextHelper.getContext(null);
String nonce = System.getProperty("console.nonce");
if (nonce == null) {
nonce = ""+ctx.random().nextLong();
System.setProperty("console.nonce", nonce);
}
return nonce;
}
public static String renderStatus(String urlBase, String action, String nonce) {
RouterContext ctx = ContextHelper.getContext(null);
String systemNonce = getNonce();
if ( (nonce != null) && (systemNonce.equals(nonce)) && (action != null) ) {
if ("shutdownImmediate".equals(action)) {
ctx.router().addShutdownTask(new ConfigServiceHandler.UpdateWrapperManagerTask(Router.EXIT_HARD));
ctx.router().shutdown(Router.EXIT_HARD); // never returns
} else if ("cancelShutdown".equals(action)) {
ctx.router().cancelGracefulShutdown();
} else if ("restartImmediate".equals(action)) {
ctx.router().addShutdownTask(new ConfigServiceHandler.UpdateWrapperManagerTask(Router.EXIT_HARD_RESTART));
ctx.router().shutdown(Router.EXIT_HARD_RESTART); // never returns
} else if ("restart".equals(action)) {
ctx.router().addShutdownTask(new ConfigServiceHandler.UpdateWrapperManagerTask(Router.EXIT_GRACEFUL_RESTART));
ctx.router().shutdownGracefully(Router.EXIT_GRACEFUL_RESTART);
} else if ("shutdown".equals(action)) {
ctx.router().addShutdownTask(new ConfigServiceHandler.UpdateWrapperManagerTask(Router.EXIT_GRACEFUL));
ctx.router().shutdownGracefully();
}
}
boolean shuttingDown = isShuttingDown(ctx);
boolean restarting = isRestarting(ctx);
long timeRemaining = ctx.router().getShutdownTimeRemaining();
if (shuttingDown) {
if (timeRemaining <= 0) {
return "<b>Shutdown imminent</b>";
} else {
return "<b>Shutdown in " + DataHelper.formatDuration(timeRemaining) + "</b><br />"
+ "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=shutdownImmediate\">Shutdown immediately</a><br />"
+ "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=cancelShutdown\">Cancel shutdown</a> ";
}
} else if (restarting) {
if (timeRemaining <= 0) {
return "<b>Restart imminent</b>";
} else {
return "<b>Restart in " + DataHelper.formatDuration(timeRemaining) + "</b><br />"
+ "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=restartImmediate\">Restart immediately</a><br />"
+ "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=cancelShutdown\">Cancel restart</a> ";
}
} else {
return "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=restart\" title=\"Graceful restart\">Restart</a> "
+ "<a href=\"" + urlBase + "?consoleNonce=" + systemNonce + "&amp;action=shutdown\" title=\"Graceful shutdown\">Shutdown</a>";
}
}
private static boolean isShuttingDown(RouterContext ctx) {
return Router.EXIT_GRACEFUL == ctx.router().scheduledGracefulExitCode();
}
private static boolean isRestarting(RouterContext ctx) {
return Router.EXIT_GRACEFUL_RESTART == ctx.router().scheduledGracefulExitCode();
}
}
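The rendered links are plain GET requests against whatever urlBase the including page passes in; for example (the urlBase and nonce values below are illustrative):
index.jsp?consoleNonce=1234567890&action=restart            (graceful restart)
index.jsp?consoleNonce=1234567890&action=shutdownImmediate  (hard shutdown)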

View File

@ -114,7 +114,7 @@ public class FormHandler {
return;
}
if (_nonce == null) {
addFormError("You trying to mess with me? Huh? Are you?");
//addFormError("You trying to mess with me? Huh? Are you?");
_valid = false;
return;
}

View File

@ -0,0 +1,122 @@
package net.i2p.router.web;
import java.io.IOException;
import java.io.Writer;
import java.util.*;
import net.i2p.data.DataHelper;
import net.i2p.stat.Rate;
import net.i2p.router.RouterContext;
public class GraphHelper {
private RouterContext _context;
private Writer _out;
private int _periodCount;
private boolean _showEvents;
private int _width;
private int _height;
private int _refreshDelaySeconds;
/**
* Configure this bean to query a particular router context
*
* @param contextId beginning few characters of the routerHash, or null to pick
* the first one we come across.
*/
public void setContextId(String contextId) {
try {
_context = ContextHelper.getContext(contextId);
} catch (Throwable t) {
t.printStackTrace();
}
}
public GraphHelper() {
_periodCount = 60; // SummaryListener.PERIODS;
_showEvents = false;
_width = 250;
_height = 100;
_refreshDelaySeconds = 60;
}
public void setOut(Writer out) { _out = out; }
public void setPeriodCount(String str) {
try { _periodCount = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setShowEvents(boolean b) { _showEvents = b; }
public void setHeight(String str) {
try { _height = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setWidth(String str) {
try { _width = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setRefreshDelay(String str) {
try { _refreshDelaySeconds = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public String getImages() {
try {
_out.write("<img src=\"viewstat.jsp?stat=bw.combined"
+ "&amp;periodCount=" + _periodCount
+ "&amp;width=" + _width
+ "&amp;height=" + _height
+ "\" title=\"Combined bandwidth graph\" />\n");
List listeners = StatSummarizer.instance().getListeners();
TreeSet ordered = new TreeSet(new AlphaComparator());
ordered.addAll(listeners);
for (Iterator iter = ordered.iterator(); iter.hasNext(); ) {
SummaryListener lsnr = (SummaryListener)iter.next();
Rate r = lsnr.getRate();
String title = r.getRateStat().getName() + " for " + DataHelper.formatDuration(_periodCount * r.getPeriod());
_out.write("<img src=\"viewstat.jsp?stat=" + r.getRateStat().getName()
+ "&amp;showEvents=" + _showEvents
+ "&amp;period=" + r.getPeriod()
+ "&amp;periodCount=" + _periodCount
+ "&amp;width=" + _width
+ "&amp;height=" + _height
+ "\" title=\"" + title + "\" />\n");
}
if (_refreshDelaySeconds > 0)
_out.write("<meta http-equiv=\"refresh\" content=\"" + _refreshDelaySeconds + "\" />\n");
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
public String getForm() {
try {
_out.write("<form action=\"graphs.jsp\" method=\"GET\">");
_out.write("Periods: <input size=\"3\" type=\"text\" name=\"periodCount\" value=\"" + _periodCount + "\" /><br />\n");
_out.write("Plot averages: <input type=\"radio\" name=\"showEvents\" value=\"false\" " + (_showEvents ? "" : "checked=\"true\" ") + " /> ");
_out.write("or plot events: <input type=\"radio\" name=\"showEvents\" value=\"true\" "+ (_showEvents ? "checked=\"true\" " : "") + " /><br />\n");
_out.write("Image sizes: width: <input size=\"4\" type=\"text\" name=\"width\" value=\"" + _width
+ "\" /> pixels, height: <input size=\"4\" type=\"text\" name=\"height\" value=\"" + _height
+ "\" /><br />\n");
_out.write("Refresh delay: <select name=\"refreshDelay\"><option value=\"60\">1 minute</option><option value=\"120\">2 minutes</option><option value=\"300\">5 minutes</option><option value=\"600\">10 minutes</option><option value=\"-1\">Never</option></select><br />\n");
_out.write("<input type=\"submit\" value=\"Redraw\" />");
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
public String getPeerSummary() {
try {
_context.commSystem().renderStatusHTML(_out);
_context.bandwidthLimiter().renderStatusHTML(_out);
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
}
class AlphaComparator implements Comparator {
public int compare(Object lhs, Object rhs) {
SummaryListener l = (SummaryListener)lhs;
SummaryListener r = (SummaryListener)rhs;
String lName = l.getRate().getRateStat().getName() + "." + l.getRate().getPeriod();
String rName = r.getRate().getRateStat().getName() + "." + r.getRate().getPeriod();
return lName.compareTo(rName);
}
}
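Since graphs.jsp wires request parameters straight into these setters via jsp:setProperty property="*" (see the new graphs.jsp further down), the form above is equivalent to a hand-built URL such as (values illustrative):
graphs.jsp?periodCount=120&width=400&height=150&showEvents=false&refreshDelay=300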

View File

@ -24,6 +24,7 @@ public class NoticeHelper {
}
public String getSystemNotice() {
if (true) return ""; // moved to the left hand nav
if (_context.router().gracefulShutdownInProgress()) {
long remaining = _context.router().getShutdownTimeRemaining();
if (remaining > 0)

View File

@ -8,6 +8,8 @@ import net.i2p.router.RouterContext;
public class PeerHelper {
private RouterContext _context;
private Writer _out;
private int _sortFlags;
private String _urlBase;
/**
* Configure this bean to query a particular router context
*
@ -25,10 +27,22 @@ public class PeerHelper {
public PeerHelper() {}
public void setOut(Writer out) { _out = out; }
public void setSort(String flags) {
if (flags != null) {
try {
_sortFlags = Integer.parseInt(flags);
} catch (NumberFormatException nfe) {
_sortFlags = 0;
}
} else {
_sortFlags = 0;
}
}
public void setUrlBase(String base) { _urlBase = base; }
public String getPeerSummary() {
try {
_context.commSystem().renderStatusHTML(_out);
_context.commSystem().renderStatusHTML(_out, _urlBase, _sortFlags);
_context.bandwidthLimiter().renderStatusHTML(_out);
} catch (IOException ioe) {
ioe.printStackTrace();

View File

@ -17,7 +17,7 @@ import net.i2p.util.I2PThread;
/**
* Handler to deal with reseed requests. This reseeds from the URL
* http://dev.i2p.net/i2pdb/ unless the java env property "i2p.reseedURL" is
* http://dev.i2p.net/i2pdb2/ unless the java env property "i2p.reseedURL" is
* set. It always writes to ./netDb/, so don't mess with that.
*
*/
@ -58,7 +58,7 @@ public class ReseedHandler {
}
}
private static final String DEFAULT_SEED_URL = "http://dev.i2p.net/i2pdb/";
static final String DEFAULT_SEED_URL = "http://dev.i2p.net/i2pdb2/";
/**
* Reseed has been requested, so lets go ahead and do it. Fetch all of
* the routerInfo-*.dat files from the specified URL (or the default) and
@ -71,7 +71,9 @@ public class ReseedHandler {
seedURL = DEFAULT_SEED_URL;
try {
URL dir = new URL(seedURL);
String content = new String(readURL(dir));
byte contentRaw[] = readURL(dir);
if (contentRaw == null) return;
String content = new String(contentRaw);
Set urls = new HashSet();
int cur = 0;
while (true) {

View File

@ -25,6 +25,7 @@ public class RouterConsoleRunner {
static {
System.setProperty("org.mortbay.http.Version.paranoid", "true");
System.setProperty("java.awt.headless", "true");
}
public RouterConsoleRunner(String args[]) {
@ -95,6 +96,10 @@ public class RouterConsoleRunner {
I2PThread t = new I2PThread(fetcher, "NewsFetcher");
t.setDaemon(true);
t.start();
I2PThread st = new I2PThread(new StatSummarizer(), "StatSummarizer");
st.setDaemon(true);
st.start();
}
private void initialize(WebApplicationContext context) {

View File

@ -0,0 +1,234 @@
package net.i2p.router.web;
import java.io.*;
import java.util.*;
import net.i2p.stat.*;
import net.i2p.router.*;
import net.i2p.util.Log;
import java.awt.Color;
import org.jrobin.graph.RrdGraph;
import org.jrobin.graph.RrdGraphDef;
import org.jrobin.graph.RrdGraphDefTemplate;
import org.jrobin.core.RrdException;
/**
*
*/
public class StatSummarizer implements Runnable {
private RouterContext _context;
private Log _log;
/** list of SummaryListener instances */
private List _listeners;
private static StatSummarizer _instance;
public StatSummarizer() {
_context = (RouterContext)RouterContext.listContexts().get(0); // fuck it, only summarize one per jvm
_log = _context.logManager().getLog(getClass());
_listeners = new ArrayList(16);
_instance = this;
}
public static StatSummarizer instance() { return _instance; }
public void run() {
String specs = "";
while (_context.router().isAlive()) {
specs = adjustDatabases(specs);
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
}
}
/** list of SummaryListener instances */
List getListeners() { return _listeners; }
private static final String DEFAULT_DATABASES = "bw.sendRate.60000" +
",bw.recvRate.60000" +
",tunnel.testSuccessTime.60000" +
",udp.outboundActiveCount.60000" +
",udp.receivePacketSize.60000" +
",udp.receivePacketSkew.60000" +
",udp.sendConfirmTime.60000" +
",udp.sendPacketSize.60000" +
",router.activePeers.60000" +
",router.activeSendPeers.60000" +
",tunnel.acceptLoad.60000" +
",tunnel.dropLoadProactive.60000" +
",tunnel.buildExploratorySuccess.60000" +
",tunnel.buildExploratoryReject.60000" +
",tunnel.buildExploratoryExpire.60000" +
",client.sendAckTime.60000" +
",client.dispatchNoACK.60000" +
",ntcp.sendTime.60000" +
",ntcp.transmitTime.60000" +
",ntcp.sendBacklogTime.60000" +
",ntcp.receiveTime.60000" +
",transport.sendMessageFailureLifetime.60000" +
",transport.sendProcessingTime.60000";
private String adjustDatabases(String oldSpecs) {
String spec = _context.getProperty("stat.summaries", DEFAULT_DATABASES);
if ( ( (spec == null) && (oldSpecs == null) ) ||
( (spec != null) && (oldSpecs != null) && (oldSpecs.equals(spec))) )
return oldSpecs;
List old = parseSpecs(oldSpecs);
List newSpecs = parseSpecs(spec);
// remove old ones
for (int i = 0; i < old.size(); i++) {
Rate r = (Rate)old.get(i);
if (!newSpecs.contains(r))
removeDb(r);
}
// add new ones
StringBuffer buf = new StringBuffer();
for (int i = 0; i < newSpecs.size(); i++) {
Rate r = (Rate)newSpecs.get(i);
if (!old.contains(r))
addDb(r);
buf.append(r.getRateStat().getName()).append(".").append(r.getPeriod());
if (i + 1 < newSpecs.size())
buf.append(',');
}
return buf.toString();
}
private void removeDb(Rate r) {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(r)) {
_listeners.remove(i);
lsnr.stopListening();
return;
}
}
}
private void addDb(Rate r) {
SummaryListener lsnr = new SummaryListener(r);
_listeners.add(lsnr);
lsnr.startListening();
//System.out.println("Start listening for " + r.getRateStat().getName() + ": " + r.getPeriod());
}
public boolean renderPng(Rate rate, OutputStream out) throws IOException {
return renderPng(rate, out, -1, -1, false, false, false, false, -1, true);
}
public boolean renderPng(Rate rate, OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(rate)) {
lsnr.renderPng(out, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
return true;
}
}
return false;
}
public boolean renderPng(OutputStream out, String templateFilename) throws IOException {
SummaryRenderer.render(_context, out, templateFilename);
return true;
}
public boolean getXML(Rate rate, OutputStream out) throws IOException {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(rate)) {
lsnr.getData().exportXml(out);
out.write(("<!-- Rate: " + lsnr.getRate().getRateStat().getName() + " for period " + lsnr.getRate().getPeriod() + " -->\n").getBytes());
out.write(("<!-- Average data soure name: " + lsnr.getName() + " event count data source name: " + lsnr.getEventName() + " -->\n").getBytes());
return true;
}
}
return false;
}
public boolean renderRatePng(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
long end = _context.clock().now();
if (periodCount <= 0) periodCount = SummaryListener.PERIODS;
if (periodCount > SummaryListener.PERIODS)
periodCount = SummaryListener.PERIODS;
long period = 60*1000;
long start = end - period*periodCount;
long begin = System.currentTimeMillis();
try {
RrdGraphDef def = new RrdGraphDef();
def.setTimePeriod(start/1000, end/1000);
String title = "Bandwidth usage";
if (!hideTitle)
def.setTitle(title);
String sendName = SummaryListener.createName(_context, "bw.sendRate.60000");
String recvName = SummaryListener.createName(_context, "bw.recvRate.60000");
def.datasource(sendName, sendName, sendName, "AVERAGE", "MEMORY");
def.datasource(recvName, recvName, recvName, "AVERAGE", "MEMORY");
def.area(sendName, Color.BLUE, "Outbound bytes/second");
//def.line(sendName, Color.BLUE, "Outbound bytes/second", 3);
//def.line(recvName, Color.RED, "Inbound bytes/second@r", 3);
def.area(recvName, Color.RED, "Inbound bytes/second@r");
if (!hideLegend) {
def.gprint(sendName, "AVERAGE", "outbound average: @2@sbytes/second");
def.gprint(sendName, "MAX", " max: @2@sbytes/second@r");
def.gprint(recvName, "AVERAGE", "inbound average: @2bytes/second@s");
def.gprint(recvName, "MAX", " max: @2@sbytes/second@r");
}
if (!showCredit)
def.setShowSignature(false);
if (hideLegend)
def.setShowLegend(false);
if (hideGrid) {
def.setGridX(false);
def.setGridY(false);
}
//System.out.println("rendering: path=" + path + " dsNames[0]=" + dsNames[0] + " dsNames[1]=" + dsNames[1] + " lsnr.getName=" + _listener.getName());
def.setAntiAliasing(false);
//System.out.println("Rendering: \n" + def.exportXmlTemplate());
//System.out.println("*****************\nData: \n" + _listener.getData().dump());
RrdGraph graph = new RrdGraph(def);
//System.out.println("Graph created");
byte data[] = null;
if ( (width <= 0) || (height <= 0) )
data = graph.getPNGBytes();
else
data = graph.getPNGBytes(width, height);
long timeToPlot = System.currentTimeMillis() - begin;
out.write(data);
//File t = File.createTempFile("jrobinData", ".xml");
//_listener.getData().dumpXml(new FileOutputStream(t));
//System.out.println("plotted: " + (data != null ? data.length : 0) + " bytes in " + timeToPlot
// ); // + ", data written to " + t.getAbsolutePath());
return true;
} catch (RrdException re) {
_log.error("Error rendering", re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
_log.error("Error rendering", ioe);
throw ioe;
}
}
/**
* @param specs statName.period,statName.period,statName.period
* @return list of Rate objects
*/
private List parseSpecs(String specs) {
StringTokenizer tok = new StringTokenizer(specs, ",");
List rv = new ArrayList();
while (tok.hasMoreTokens()) {
String spec = tok.nextToken();
int split = spec.lastIndexOf('.');
if ( (split <= 0) || (split + 1 >= spec.length()) )
continue;
String name = spec.substring(0, split);
String per = spec.substring(split+1);
long period = -1;
try {
period = Long.parseLong(per);
RateStat rs = _context.statManager().getRate(name);
if (rs != null) {
Rate r = rs.getRate(period);
if (r != null)
rv.add(r);
}
} catch (NumberFormatException nfe) {}
}
return rv;
}
}
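The per-router override uses the same statName.period spec format that parseSpecs() expects; for example, to chart only bandwidth, an illustrative minimal setting would be:
stat.summaries=bw.sendRate.60000,bw.recvRate.60000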

View File

@ -107,7 +107,8 @@ public class SummaryHelper {
}
public boolean allowReseed() {
return (_context.netDb().getKnownRouters() < 30);
return (_context.netDb().getKnownRouters() < 30) ||
Boolean.valueOf(_context.getProperty("i2p.alwaysAllowReseed", "false")).booleanValue();
}
public int getAllPeers() { return _context.netDb().getKnownRouters(); }
@ -213,11 +214,11 @@ public class SummaryHelper {
}
/**
* How fast we have been receiving data over the last minute (pretty printed
* How fast we have been receiving data over the last second (pretty printed
* string with 2 decimal places representing the KBps)
*
*/
public String getInboundMinuteKBps() {
public String getInboundSecondKBps() {
if (_context == null)
return "0.0";
double kbps = _context.bandwidthLimiter().getReceiveBps()/1024d;
@ -225,11 +226,11 @@ public class SummaryHelper {
return fmt.format(kbps);
}
/**
* How fast we have been sending data over the last minute (pretty printed
* How fast we have been sending data over the last second (pretty printed
* string with 2 decimal places representing the KBps)
*
*/
public String getOutboundMinuteKBps() {
public String getOutboundSecondKBps() {
if (_context == null)
return "0.0";
double kbps = _context.bandwidthLimiter().getSendBps()/1024d;
@ -493,6 +494,13 @@ public class SummaryHelper {
return _context.throttle().getTunnelLag() + "ms";
}
public String getInboundBacklog() {
if (_context == null)
return "0";
return String.valueOf(_context.tunnelManager().getInboundBuildQueueSize());
}
public boolean updateAvailable() {
return NewsFetcher.getInstance(_context).updateAvailable();
}
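The reseed escape hatch added above is controlled by an advanced property; enabling it would look like this (illustrative):
i2p.alwaysAllowReseed=true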

View File

@ -0,0 +1,250 @@
package net.i2p.router.web;
import java.io.*;
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.stat.Rate;
import net.i2p.stat.RateStat;
import net.i2p.stat.RateSummaryListener;
import net.i2p.util.Log;
import org.jrobin.core.RrdDb;
import org.jrobin.core.RrdDef;
import org.jrobin.core.RrdBackendFactory;
import org.jrobin.core.RrdMemoryBackendFactory;
import org.jrobin.core.Sample;
import java.awt.Color;
import org.jrobin.graph.RrdGraph;
import org.jrobin.graph.RrdGraphDef;
import org.jrobin.graph.RrdGraphDefTemplate;
import org.jrobin.core.RrdException;
class SummaryListener implements RateSummaryListener {
private I2PAppContext _context;
private Log _log;
private Rate _rate;
private String _name;
private String _eventName;
private RrdDb _db;
private Sample _sample;
private RrdMemoryBackendFactory _factory;
private SummaryRenderer _renderer;
static final int PERIODS = 1440;
static {
try {
RrdBackendFactory.setDefaultFactory("MEMORY");
} catch (RrdException re) {
re.printStackTrace();
}
}
public SummaryListener(Rate r) {
_context = I2PAppContext.getGlobalContext();
_rate = r;
_log = _context.logManager().getLog(SummaryListener.class);
}
public void add(double totalValue, long eventCount, double totalEventTime, long period) {
long now = now();
long when = now / 1000;
//System.out.println("add to " + getRate().getRateStat().getName() + " on " + System.currentTimeMillis() + " / " + now + " / " + when);
if (_db != null) {
// add one value to the db (the average value for the period)
try {
_sample.setTime(when);
double val = eventCount > 0 ? (totalValue / (double)eventCount) : 0d;
_sample.setValue(_name, val);
_sample.setValue(_eventName, eventCount);
//_sample.setValue(0, val);
//_sample.setValue(1, eventCount);
_sample.update();
//String names[] = _sample.getDsNames();
//System.out.println("Add " + val + " over " + eventCount + " for " + _name
// + " [" + names[0] + ", " + names[1] + "]");
} catch (IOException ioe) {
_log.error("Error adding", ioe);
} catch (RrdException re) {
_log.error("Error adding", re);
}
}
}
/**
* JRobin can only deal with 20 character data source names, so we need to create a unique,
* munged version from the user/developer-visible name.
*
*/
static String createName(I2PAppContext ctx, String wanted) {
return ctx.sha().calculateHash(DataHelper.getUTF8(wanted)).toBase64().substring(0,20);
}
public Rate getRate() { return _rate; }
public void startListening() {
RateStat rs = _rate.getRateStat();
long period = _rate.getPeriod();
String baseName = rs.getName() + "." + period;
_name = createName(_context, baseName);
_eventName = createName(_context, baseName + ".events");
try {
RrdDef def = new RrdDef(_name, now()/1000, period/1000);
// for info on the heartbeat, xff, steps, etc, see the rrdcreate man page, aka
// http://www.jrobin.org/support/man/rrdcreate.html
long heartbeat = period*10/1000;
def.addDatasource(_name, "GAUGE", heartbeat, Double.NaN, Double.NaN);
def.addDatasource(_eventName, "GAUGE", heartbeat, 0, Double.NaN);
double xff = 0.9;
int steps = 1;
int rows = PERIODS;
def.addArchive("AVERAGE", xff, steps, rows);
_factory = (RrdMemoryBackendFactory)RrdBackendFactory.getDefaultFactory();
_db = new RrdDb(def, _factory);
_sample = _db.createSample();
_renderer = new SummaryRenderer(_context, this);
_rate.setSummaryListener(this);
} catch (RrdException re) {
_log.error("Error starting", re);
} catch (IOException ioe) {
_log.error("Error starting", ioe);
}
}
public void stopListening() {
if (_db == null) return;
try {
_db.close();
} catch (IOException ioe) {
_log.error("Error closing", ioe);
}
_rate.setSummaryListener(null);
_factory.delete(_db.getPath());
_db = null;
}
public void renderPng(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
_renderer.render(out, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
}
public void renderPng(OutputStream out) throws IOException { _renderer.render(out); }
String getName() { return _name; }
String getEventName() { return _eventName; }
RrdDb getData() { return _db; }
long now() { return _context.clock().now(); }
public boolean equals(Object obj) {
return ((obj instanceof SummaryListener) && ((SummaryListener)obj)._rate.equals(_rate));
}
public int hashCode() { return _rate.hashCode(); }
}
class SummaryRenderer {
private Log _log;
private SummaryListener _listener;
public SummaryRenderer(I2PAppContext ctx, SummaryListener lsnr) {
_log = ctx.logManager().getLog(SummaryRenderer.class);
_listener = lsnr;
}
/**
* Render the stats as determined by the specified JRobin xml config,
* but note that this doesn't work on stock jvms, as it requires
* DOM level 3 load and store support. Perhaps we can bundle that, or
* specify who can get it from where, etc.
*
*/
public static synchronized void render(I2PAppContext ctx, OutputStream out, String filename) throws IOException {
long end = ctx.clock().now();
long start = end - 60*1000*SummaryListener.PERIODS;
long begin = System.currentTimeMillis();
try {
RrdGraphDefTemplate template = new RrdGraphDefTemplate(filename);
RrdGraphDef def = template.getRrdGraphDef();
def.setTimePeriod(start/1000, end/1000); // ignore the periods in the template
RrdGraph graph = new RrdGraph(def);
byte img[] = graph.getPNGBytes();
out.write(img);
} catch (RrdException re) {
//_log.error("Error rendering " + filename, re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
//_log.error("Error rendering " + filename, ioe);
throw ioe;
}
}
public void render(OutputStream out) throws IOException { render(out, -1, -1, false, false, false, false, -1, true); }
public void render(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
long end = _listener.now();
if (periodCount <= 0) periodCount = SummaryListener.PERIODS;
if (periodCount > SummaryListener.PERIODS)
periodCount = SummaryListener.PERIODS;
long start = end - _listener.getRate().getPeriod()*periodCount;
long begin = System.currentTimeMillis();
try {
RrdGraphDef def = new RrdGraphDef();
def.setTimePeriod(start/1000, end/1000);
String title = _listener.getRate().getRateStat().getName() + " averaged for "
+ DataHelper.formatDuration(_listener.getRate().getPeriod());
if (!hideTitle)
def.setTitle(title);
String path = _listener.getData().getPath();
String dsNames[] = _listener.getData().getDsNames();
String plotName = null;
String descr = null;
if (showEvents) {
// include the average event count on the plot
plotName = dsNames[1];
descr = "Events per period";
} else {
// include the average value
plotName = dsNames[0];
descr = _listener.getRate().getRateStat().getDescription();
}
def.datasource(plotName, path, plotName, "AVERAGE", "MEMORY");
def.area(plotName, Color.BLUE, descr + "@r");
if (!hideLegend) {
def.gprint(plotName, "AVERAGE", "average: @2@s");
def.gprint(plotName, "MAX", " max: @2@s@r");
}
if (!showCredit)
def.setShowSignature(false);
/*
// these four lines set up a graph plotting both values and events on the same chart
// (but with the same coordinates, so the values may look pretty skewed)
def.datasource(dsNames[0], path, dsNames[0], "AVERAGE", "MEMORY");
def.datasource(dsNames[1], path, dsNames[1], "AVERAGE", "MEMORY");
def.area(dsNames[0], Color.BLUE, _listener.getRate().getRateStat().getDescription());
def.line(dsNames[1], Color.RED, "Events per period");
*/
if (hideLegend)
def.setShowLegend(false);
if (hideGrid) {
def.setGridX(false);
def.setGridY(false);
}
//System.out.println("rendering: path=" + path + " dsNames[0]=" + dsNames[0] + " dsNames[1]=" + dsNames[1] + " lsnr.getName=" + _listener.getName());
def.setAntiAliasing(false);
//System.out.println("Rendering: \n" + def.exportXmlTemplate());
//System.out.println("*****************\nData: \n" + _listener.getData().dump());
RrdGraph graph = new RrdGraph(def);
//System.out.println("Graph created");
byte data[] = null;
if ( (width <= 0) || (height <= 0) )
data = graph.getPNGBytes();
else
data = graph.getPNGBytes(width, height);
long timeToPlot = System.currentTimeMillis() - begin;
out.write(data);
//File t = File.createTempFile("jrobinData", ".xml");
//_listener.getData().dumpXml(new FileOutputStream(t));
//System.out.println("plotted: " + (data != null ? data.length : 0) + " bytes in " + timeToPlot
// ); // + ", data written to " + t.getAbsolutePath());
} catch (RrdException re) {
_log.error("Error rendering", re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
_log.error("Error rendering", ioe);
throw ioe;
}
}
}
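For the common 60-second rates, the RRD set up in startListening() works out to the following (simple arithmetic from the constants above):
// step      = period / 1000      = 60 s per row
// heartbeat = period * 10 / 1000 = 600 s
// archive   = AVERAGE, xff 0.9, 1 step, PERIODS (1440) rows
//           -> 1440 one-minute samples, i.e. 24 hours of in-memory history per stat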

View File

@ -28,19 +28,6 @@
<input type="hidden" name="nonce" value="<%=System.getProperty("net.i2p.router.web.ConfigNetHandler.nonce")%>" />
<input type="hidden" name="action" value="blah" />
<b>External UDP address:</b> <i><jsp:getProperty name="nethelper" property="udpAddress" /></i><br />
<b>Require SSU introductions? </b>
<input type="checkbox" name="requireIntroductions" value="true" <jsp:getProperty name="nethelper" property="requireIntroductionsChecked" /> /><br />
<p>If you can, please poke a hole in your NAT or firewall to allow unsolicited UDP packets to reach
you on your external UDP address. If you can't, I2P now includes support for UDP hole punching
with "SSU introductions" - peers who will relay a request from someone you don't know to your
router, so that you can make an outbound connection to them. I2P will use these
introductions automatically if it detects that the port is not forwarded (as shown by
the <i>Status: OK (NAT)</i> line), or you can manually require them here.
Users behind symmetric NATs, such as OpenBSD's pf, are not currently supported.</p>
<input type="submit" name="recheckReachability" value="Check network reachability..." />
<hr />
<b>Bandwidth limiter</b><br />
Inbound rate:
<input name="inboundrate" type="text" size="2" value="<jsp:getProperty name="nethelper" property="inboundRate" />" /> KBps
@ -56,16 +43,43 @@
A negative rate means a default limit of 16KBytes per second.</i><br />
Bandwidth share percentage:
<jsp:getProperty name="nethelper" property="sharePercentageBox" /><br />
Sharing a higher percentage will improve your anonymity and help the network
Sharing a higher percentage will improve your anonymity and help the network<br />
<input type="submit" name="save" value="Save changes" /> <input type="reset" value="Cancel" /><br />
<hr />
<b>Enable load testing: </b>
<input type="checkbox" name="enableloadtesting" value="true" <jsp:getProperty name="nethelper" property="enableLoadTesting" /> />
<p>If enabled, your router will periodically anonymously probe some of your peers
to see what sort of throughput they can handle. This improves your router's ability
to pick faster peers, but can cost substantial bandwidth. Relevent data from the
to pick faster peers, but can cost substantial bandwidth. Relevant data from the
load testing is fed into the profiles as well as the
<a href="oldstats.jsp#test.rtt">test.rtt</a> and related stats.</p>
<hr />
<b>External UDP address:</b> <i><jsp:getProperty name="nethelper" property="udpAddress" /></i><br />
<b>Require SSU introductions? </b>
<input type="checkbox" name="requireIntroductions" value="true" <jsp:getProperty name="nethelper" property="requireIntroductionsChecked" /> /><br />
<p>If you can, please poke a hole in your NAT or firewall to allow unsolicited UDP packets to reach
you on your external UDP address. If you can't, I2P now includes support for UDP hole punching
with "SSU introductions" - peers who will relay a request from someone you don't know to your
router, so that you can make an outbound connection to them. I2P will use these
introductions automatically if it detects that the port is not forwarded (as shown by
the <i>Status: OK (NAT)</i> line), or you can manually require them here.
Users behind symmetric NATs, such as OpenBSD's pf, are not currently supported.</p>
<input type="submit" name="recheckReachability" value="Check network reachability..." />
<hr />
<b>Inbound TCP connection configuration:</b><br />
Externally reachable hostname or IP address:
<input name ="ntcphost" type="text" size="16" value="<jsp:getProperty name="nethelper" property="ntcphostname" />" />
(dyndns and the like are fine)<br />
Externally reachable TCP port:
<input name ="ntcpport" type="text" size="6" value="<jsp:getProperty name="nethelper" property="ntcpport" />" /><br />
<p>You do <i>not</i> need to allow inbound TCP connections - outbound connections work with no
configuration. However, if you want to receive inbound TCP connections, you <b>must</b> poke a hole
in your NAT or firewall for unsolicited TCP connections. If you specify the wrong IP address or
hostname, or do not properly configure your NAT or firewall, your network performance will degrade
substantially. When in doubt, leave the hostname and port number blank.</p>
<p><b>Note: changing this setting will terminate all of your connections and effectively
restart your router.</b></p>
<hr />
<b>Dynamic Router Keys: </b>
<input type="checkbox" name="dynamicKeys" value="true" <jsp:getProperty name="nethelper" property="dynamicKeysChecked" /> /><br />
<p>

View File

@ -0,0 +1,23 @@
<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head>
<title>I2P Router Console - graphs</title>
<link rel="stylesheet" href="default.css" type="text/css" />
</head><body>
<%@include file="nav.jsp" %>
<%@include file="summary.jsp" %>
<div class="main" id="main">
<jsp:useBean class="net.i2p.router.web.GraphHelper" id="graphHelper" scope="request" />
<jsp:setProperty name="graphHelper" property="*" />
<jsp:setProperty name="graphHelper" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<jsp:setProperty name="graphHelper" property="out" value="<%=out%>" />
<jsp:getProperty name="graphHelper" property="images" />
<jsp:getProperty name="graphHelper" property="form" />
</div>
</body>
</html>

View File

@ -23,45 +23,6 @@ if (System.getProperty("router.consoleNonce") == null) {
</div>
<div class="main" id="main">
<jsp:useBean class="net.i2p.router.web.ConfigServiceHandler" id="servicehandler" scope="request" />
<jsp:setProperty name="servicehandler" property="*" />
<jsp:setProperty name="servicehandler" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<font color="red"><jsp:getProperty name="servicehandler" property="errors" /></font>
<i><jsp:getProperty name="servicehandler" property="notices" /></i>
<jsp:useBean class="net.i2p.router.web.ConfigNetHandler" id="nethandler" scope="request" />
<jsp:setProperty name="nethandler" property="*" />
<jsp:setProperty name="nethandler" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<font color="red"><jsp:getProperty name="nethandler" property="errors" /></font>
<i><jsp:getProperty name="nethandler" property="notices" /></i>
<jsp:useBean class="net.i2p.router.web.ConfigNetHelper" id="nethelper" scope="request" />
<jsp:setProperty name="nethelper" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<form action="index.jsp" method="POST">
<input type="hidden" name="nonce" value="<%=System.getProperty("router.consoleNonce")%>" />
<input type="hidden" name="updateratesonly" value="true" />
<input type="hidden" name="save" value="Save changes" />
Inbound bandwidth:
<input name="inboundrate" type="text" size="2" value="<jsp:getProperty name="nethelper" property="inboundRate" />" /> KBps
bursting up to
<input name="inboundburstrate" type="text" size="2" value="<jsp:getProperty name="nethelper" property="inboundBurstRate" />" /> KBps for
<jsp:getProperty name="nethelper" property="inboundBurstFactorBox" /><br />
Outbound bandwidth:
<input name="outboundrate" type="text" size="2" value="<jsp:getProperty name="nethelper" property="outboundRate" />" /> KBps
bursting up to
<input name="outboundburstrate" type="text" size="2" value="<jsp:getProperty name="nethelper" property="outboundBurstRate" />" /> KBps for
<jsp:getProperty name="nethelper" property="outboundBurstFactorBox" /><br />
<i>KBps = kilobytes per second = 1024 bytes per second.</i>
<input type="submit" value="Save changes" name="action" />
<hr />
<input type="submit" name="action" value="Graceful restart" />
<input type="submit" name="action" value="Shutdown gracefully" />
<a href="configservice.jsp">Other shutdown/restart options</a>
<hr />
</form>
<jsp:useBean class="net.i2p.router.web.ContentHelper" id="contenthelper" scope="request" />
<jsp:setProperty name="contenthelper" property="page" value="docs/readme.html" />
<jsp:setProperty name="contenthelper" property="maxLines" value="300" />

View File

@ -33,6 +33,7 @@
<a href="netdb.jsp">NetDB</a> |
<a href="logs.jsp">Logs</a> |
<a href="jobs.jsp">Jobs</a> |
<a href="graphs.jsp">Graphs</a> |
<a href="oldstats.jsp">Stats</a> |
<a href="oldconsole.jsp">Internals</a>
<% } %>

View File

@ -14,6 +14,8 @@
<jsp:useBean class="net.i2p.router.web.PeerHelper" id="peerHelper" scope="request" />
<jsp:setProperty name="peerHelper" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<jsp:setProperty name="peerHelper" property="out" value="<%=out%>" />
<jsp:setProperty name="peerHelper" property="urlBase" value="peers.jsp" />
<jsp:setProperty name="peerHelper" property="sort" value="<%=request.getParameter("sort") != null ? request.getParameter("sort") : ""%>" />
<jsp:getProperty name="peerHelper" property="peerSummary" />
</div>

View File

@ -14,10 +14,10 @@
<b>Version:</b> <jsp:getProperty name="helper" property="version" /><br />
<b>Uptime:</b> <jsp:getProperty name="helper" property="uptime" /><br />
<b>Now:</b> <jsp:getProperty name="helper" property="time" /><br />
<b>Status:</b> <a href="config.jsp"><jsp:getProperty name="helper" property="reachability" /></a><br /><%
<b>Status:</b> <a href="config.jsp"><jsp:getProperty name="helper" property="reachability" /></a><%
if (helper.updateAvailable()) {
if ("true".equals(System.getProperty("net.i2p.router.web.UpdateHandler.updateInProgress", "false"))) {
out.print(update.getStatus());
out.print("<br />" + update.getStatus());
} else {
long nonce = new java.util.Random().nextLong();
String prev = System.getProperty("net.i2p.router.web.UpdateHandler.nonce");
@ -28,16 +28,18 @@
uri = uri + "&updateNonce=" + nonce;
else
uri = uri + "?updateNonce=" + nonce;
out.print(" <a href=\"" + uri + "\">Update available</a>");
out.print("<br /><a href=\"" + uri + "\">Update available</a>");
}
}
%><hr />
%>
<br /><%=net.i2p.router.web.ConfigRestartBean.renderStatus(request.getRequestURI(), request.getParameter("action"), request.getParameter("consoleNonce"))%>
<hr />
<u><b><a href="peers.jsp">Peers</a></b></u><br />
<b>Active:</b> <jsp:getProperty name="helper" property="activePeers" />/<jsp:getProperty name="helper" property="activeProfiles" /><br />
<b>Fast:</b> <jsp:getProperty name="helper" property="fastPeers" /><br />
<b>High capacity:</b> <jsp:getProperty name="helper" property="highCapacityPeers" /><br />
<b>Well integrated:</b> <jsp:getProperty name="helper" property="wellIntegratedPeers" /><br />
<!-- <b>Well integrated:</b> <jsp:getProperty name="helper" property="wellIntegratedPeers" /><br /> -->
<b>Failing:</b> <jsp:getProperty name="helper" property="failingPeers" /><br />
<!-- <b>Shitlisted:</b> <jsp:getProperty name="helper" property="shitlistedPeers" /><br /> -->
<b>Known:</b> <jsp:getProperty name="helper" property="allPeers" /><br /><%
@ -62,8 +64,8 @@
}
%><hr />
<u><b>Bandwidth in/out</b></u><br />
<b>1s:</b> <jsp:getProperty name="helper" property="inboundMinuteKBps" />/<jsp:getProperty name="helper" property="outboundMinuteKBps" />KBps<br />
<u><b><a href="config.jsp" title="Configure the bandwidth limits">Bandwidth in/out</a></b></u><br />
<b>1s:</b> <jsp:getProperty name="helper" property="inboundSecondKBps" />/<jsp:getProperty name="helper" property="outboundSecondKBps" />KBps<br />
<b>5m:</b> <jsp:getProperty name="helper" property="inboundFiveMinuteKBps" />/<jsp:getProperty name="helper" property="outboundFiveMinuteKBps" />KBps<br />
<b>Total:</b> <jsp:getProperty name="helper" property="inboundLifetimeKBps" />/<jsp:getProperty name="helper" property="outboundLifetimeKBps" />KBps<br />
<b>Used:</b> <jsp:getProperty name="helper" property="inboundTransferred" />/<jsp:getProperty name="helper" property="outboundTransferred" /><br />
@ -81,6 +83,7 @@
<b>Job lag:</b> <jsp:getProperty name="helper" property="jobLag" /><br />
<b>Message delay:</b> <jsp:getProperty name="helper" property="messageDelay" /><br />
<b>Tunnel lag:</b> <jsp:getProperty name="helper" property="tunnelLag" /><br />
<b>Handle backlog:</b> <jsp:getProperty name="helper" property="inboundBacklog" /><br />
<hr />
</div>

View File

@ -0,0 +1,63 @@
<%
boolean rendered = false;
String templateFile = request.getParameter("template");
if (templateFile != null) {
java.io.OutputStream cout = response.getOutputStream();
response.setContentType("image/png");
rendered = net.i2p.router.web.StatSummarizer.instance().renderPng(cout, templateFile);
}
net.i2p.stat.Rate rate = null;
String stat = request.getParameter("stat");
String period = request.getParameter("period");
boolean fakeBw = (stat != null && ("bw.combined".equals(stat)));
net.i2p.stat.RateStat rs = net.i2p.I2PAppContext.getGlobalContext().statManager().getRate(stat);
if ( !rendered && ((rs != null) || fakeBw) ) {
long per = -1;
try {
if (fakeBw)
per = 60*1000;
else
per = Long.parseLong(period);
if (!fakeBw)
rate = rs.getRate(per);
if ( (rate != null) || (fakeBw) ) {
java.io.OutputStream cout = response.getOutputStream();
String format = request.getParameter("format");
if ("xml".equals(format)) {
if (!fakeBw) {
response.setContentType("text/xml");
rendered = net.i2p.router.web.StatSummarizer.instance().getXML(rate, cout);
}
} else {
response.setContentType("image/png");
int width = -1;
int height = -1;
int periodCount = -1;
String str = request.getParameter("width");
if (str != null) try { width = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
str = request.getParameter("height");
if (str != null) try { height = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
str = request.getParameter("periodCount");
if (str != null) try { periodCount = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
boolean hideLegend = Boolean.valueOf(""+request.getParameter("hideLegend")).booleanValue();
boolean hideGrid = Boolean.valueOf(""+request.getParameter("hideGrid")).booleanValue();
boolean hideTitle = Boolean.valueOf(""+request.getParameter("hideTitle")).booleanValue();
boolean showEvents = Boolean.valueOf(""+request.getParameter("showEvents")).booleanValue();
boolean showCredit = true;
if (request.getParameter("showCredit") != null)
showCredit = Boolean.valueOf(""+request.getParameter("showCredit")).booleanValue();
if (fakeBw)
rendered = net.i2p.router.web.StatSummarizer.instance().renderRatePng(cout, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
else
rendered = net.i2p.router.web.StatSummarizer.instance().renderPng(rate, cout, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
}
if (rendered)
cout.close();
//System.out.println("Rendered period " + per + " for the stat " + stat + "? " + rendered);
}
} catch (NumberFormatException nfe) {}
}
if (!rendered) {
response.sendError(404, "That stat is not available");
}
%>
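Put together, the parameters above yield request URLs like the following (stat names and dimensions are illustrative, matching those used by GraphHelper earlier in this changeset):
viewstat.jsp?stat=bw.combined&periodCount=60&width=250&height=100
viewstat.jsp?stat=bw.recvRate&period=60000&periodCount=120&width=400&height=150
viewstat.jsp?stat=bw.recvRate&period=60000&format=xml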

View File

@ -210,6 +210,11 @@ public class Connection {
}
}
if (packet != null) {
if (packet.isFlagSet(Packet.FLAG_RESET)) {
// sendReset takes care to prevent too-frequent RESET transmissions
sendReset();
return;
}
ResendPacketEvent evt = (ResendPacketEvent)packet.getResendEvent();
if (evt != null) {
boolean sent = evt.retransmit(false);
@ -240,9 +245,11 @@ public class Connection {
_disconnectScheduledOn = _context.clock().now();
SimpleTimer.getInstance().addEvent(new DisconnectEvent(), DISCONNECT_TIMEOUT);
}
long now = _context.clock().now();
if (_resetSentOn + 10*1000 > now) return; // don't send resets too fast
_resetSent = true;
if (_resetSentOn <= 0)
_resetSentOn = _context.clock().now();
_resetSentOn = now;
if ( (_remotePeer == null) || (_sendStreamId <= 0) ) return;
PacketLocal reply = new PacketLocal(_context, _remotePeer);
reply.setFlag(Packet.FLAG_RESET);

View File

@ -101,7 +101,7 @@ public class ConnectionOptions extends I2PSocketOptionsImpl {
setMaxWindowSize(getInt(opts, PROP_MAX_WINDOW_SIZE, Connection.MAX_WINDOW_SIZE));
setConnectDelay(getInt(opts, PROP_CONNECT_DELAY, -1));
setProfile(getInt(opts, PROP_PROFILE, PROFILE_BULK));
setMaxMessageSize(getInt(opts, PROP_MAX_MESSAGE_SIZE, 4*1024));
setMaxMessageSize(getInt(opts, PROP_MAX_MESSAGE_SIZE, 960)); // 960 fits inside a single tunnel message
setRTT(getInt(opts, PROP_INITIAL_RTT, 10*1000));
setReceiveWindow(getInt(opts, PROP_INITIAL_RECEIVE_WINDOW, 1));
setResendDelay(getInt(opts, PROP_INITIAL_RESEND_DELAY, 1000));

View File

@ -372,8 +372,6 @@ public class MessageOutputStream extends OutputStream {
/**
* called whenever the engine wants to push more data to the
* peer
*
* @return true if the data was flushed
*/
void flushAvailable(DataReceiver target) throws IOException {
flushAvailable(target, true);

View File

@ -102,7 +102,8 @@ public class PacketHandler {
Connection con = (sendId > 0 ? _manager.getConnectionByInboundId(sendId) : null);
if (con != null) {
receiveKnownCon(con, packet);
displayPacket(packet, "RECV", "wsize " + con.getOptions().getWindowSize() + " rto " + con.getOptions().getRTO());
if (_log.shouldLog(Log.INFO))
displayPacket(packet, "RECV", "wsize " + con.getOptions().getWindowSize() + " rto " + con.getOptions().getRTO());
} else {
receiveUnknownCon(packet, sendId);
displayPacket(packet, "UNKN", null);

View File

@ -19,7 +19,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* $Revision: 1.8 $
* $Revision: 1.1 $
*/
package i2p.susi.webmail.pop3;
@ -373,8 +373,7 @@ public class POP3MailBox {
}
if (socket != null) {
try {
if (sendCmd1a("")
&& sendCmd1a("USER " + user)
if (sendCmd1a("USER " + user)
&& sendCmd1a("PASS " + pass)
&& sendCmd1a("STAT") ) {

View File

@ -6,6 +6,7 @@ import java.text.*;
import net.i2p.I2PAppContext;
import net.i2p.data.*;
import net.i2p.syndie.data.*;
import net.i2p.util.FileUtil;
import net.i2p.util.Log;
/**
@ -27,6 +28,7 @@ import net.i2p.util.Log;
public class Archive {
private I2PAppContext _context;
private Log _log;
private BlogManager _mgr;
private File _rootDir;
private File _cacheDir;
private Map _blogInfo;
@ -42,9 +44,10 @@ public class Archive {
public boolean accept(File dir, String name) { return name.endsWith(".snd"); }
};
public Archive(I2PAppContext ctx, String rootDir, String cacheDir) {
public Archive(I2PAppContext ctx, String rootDir, String cacheDir, BlogManager mgr) {
_context = ctx;
_log = ctx.logManager().getLog(Archive.class);
_mgr = mgr;
_rootDir = new File(rootDir);
if (!_rootDir.exists())
_rootDir.mkdirs();
@ -71,6 +74,13 @@ public class Archive {
try {
fi = new FileInputStream(meta);
bi.load(fi);
if (_mgr.isBanned(bi.getKey().calculateHash())) {
fi.close();
fi = null;
_log.error("Deleting banned blog " + bi.getKey().calculateHash().toBase64());
delete(bi.getKey().calculateHash());
continue;
}
if (bi.verify(_context)) {
info.add(bi);
} else {
@ -119,6 +129,12 @@ public class Archive {
_log.warn("Not storing invalid blog " + info);
return false;
}
if (_mgr.isBanned(info.getKey().calculateHash())) {
_log.error("Not storing banned blog " + info.getKey().calculateHash().toBase64(), new Exception("Stored by"));
return false;
}
boolean isNew = true;
synchronized (_blogInfo) {
BlogInfo old = (BlogInfo)_blogInfo.get(info.getKey().calculateHash());
@ -211,7 +227,13 @@ public class Archive {
if (!entryDir.exists())
entryDir.mkdirs();
boolean ok = _extractor.extract(entryFile, entryDir, null, info);
boolean ok = true;
try {
ok = _extractor.extract(entryFile, entryDir, null, info);
} catch (IOException ioe) {
ok = false;
_log.error("Error extracting " + entryFile.getPath() + ", deleting it", ioe);
}
if (!ok) {
File files[] = entryDir.listFiles();
for (int i = 0; i < files.length; i++)
@ -267,8 +289,9 @@ public class Archive {
if (blogKey == null) {
// no key, cache.
File entryDir = getEntryDir(entries[i]);
if (entryDir.exists())
if (entryDir.exists()) {
entry = getCachedEntry(entryDir);
}
if ((entry == null) || !entryDir.exists()) {
if (!extractEntry(entries[i], entryDir, info)) {
_log.error("Entry " + entries[i].getPath() + " is not valid");
@ -326,6 +349,15 @@ public class Archive {
return rv;
}
public synchronized void delete(Hash blog) {
if (blog == null) return;
File blogDir = new File(_rootDir, blog.toBase64());
boolean deleted = FileUtil.rmdir(blogDir, false);
File cacheDir = new File(_cacheDir, blog.toBase64());
deleted = FileUtil.rmdir(cacheDir, false) && deleted;
_log.info("Deleted blog " + blog.toBase64() + " completely? " + deleted);
}
public boolean storeEntry(EntryContainer container) {
if (container == null) return false;
BlogURI uri = container.getURI();

View File

@ -74,7 +74,7 @@ public class BlogManager {
_cacheDir.mkdirs();
_userDir.mkdirs();
_tempDir.mkdirs();
_archive = new Archive(ctx, _archiveDir.getAbsolutePath(), _cacheDir.getAbsolutePath());
_archive = new Archive(ctx, _archiveDir.getAbsolutePath(), _cacheDir.getAbsolutePath(), this);
if (regenIndex)
_archive.regenerateIndex();
}
@ -890,6 +890,8 @@ public class BlogManager {
try {
BlogInfo info = new BlogInfo();
info.load(metadataStream);
if (isBanned(info.getKey().calculateHash()))
return false;
return _archive.storeBlogInfo(info);
} catch (IOException ioe) {
_log.error("Error importing meta", ioe);
@ -906,6 +908,8 @@ public class BlogManager {
try {
EntryContainer c = new EntryContainer();
c.load(entryStream);
if (isBanned(c.getURI().getKeyHash()))
return false;
return _archive.storeEntry(c);
} catch (IOException ioe) {
_log.error("Error importing entry", ioe);
@ -1060,4 +1064,49 @@ public class BlogManager {
return true;
return false;
}
public boolean isBanned(Hash blog) {
if ( (blog == null) || (blog.getData() == null) || (blog.getData().length <= 0) ) return false;
String str = blog.toBase64();
String banned = System.getProperty("syndie.bannedBlogs", "");
return (banned.indexOf(str) >= 0);
}
public String[] getBannedBlogs() {
List blogs = new ArrayList();
String str = System.getProperty("syndie.bannedBlogs", "");
StringTokenizer tok = new StringTokenizer(str, ",");
while (tok.hasMoreTokens()) {
String blog = tok.nextToken();
try {
Hash h = new Hash();
h.fromBase64(blog);
blogs.add(blog); // the base64 string, but verified
} catch (DataFormatException dfe) {
// ignored
}
}
String rv[] = new String[blogs.size()];
for (int i = 0; i < blogs.size(); i++)
rv[i] = (String)blogs.get(i);
return rv;
}
/**
* Delete the blog from the archive completely, and ban them from ever being added again
*/
public void purgeAndBan(Hash blog) {
String banned[] = getBannedBlogs();
StringBuffer buf = new StringBuffer();
String str = blog.toBase64();
buf.append(str);
for (int i = 0; banned != null && i < banned.length; i++) {
if (!banned[i].equals(str))
buf.append(",").append(banned[i]);
}
System.setProperty("syndie.bannedBlogs", buf.toString());
writeConfig();
_archive.delete(blog);
_archive.regenerateIndex();
}
}
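Editor's note: the ban support added above keeps the ban list as a comma-separated set of base64 blog-key hashes in the "syndie.bannedBlogs" system property; isBanned() is a substring check against that list, and purgeAndBan() prepends the new hash, rewrites the property, then deletes the blog from the archive. A minimal standalone sketch of that list handling (hash values and helper names are made up for illustration):

import java.util.StringTokenizer;

// Sketch of the comma-separated ban list kept in the "syndie.bannedBlogs" property.
public class BanListSketch {
    static boolean isBanned(String blogHashB64) {
        String banned = System.getProperty("syndie.bannedBlogs", "");
        return banned.indexOf(blogHashB64) >= 0; // simple substring check, as in the diff
    }

    static void ban(String blogHashB64) {
        String existing = System.getProperty("syndie.bannedBlogs", "");
        StringBuffer buf = new StringBuffer(blogHashB64);
        StringTokenizer tok = new StringTokenizer(existing, ",");
        while (tok.hasMoreTokens()) {
            String cur = tok.nextToken();
            if (!cur.equals(blogHashB64))
                buf.append(",").append(cur); // keep previously banned entries
        }
        System.setProperty("syndie.bannedBlogs", buf.toString());
    }

    public static void main(String[] args) {
        ban("AAAAexampleHash=");                           // illustrative hash, not a real key
        System.out.println(isBanned("AAAAexampleHash="));  // true
        System.out.println(isBanned("otherHash="));        // false
    }
}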

View File

@ -59,9 +59,9 @@ public class EntryExtractor {
}
public void extract(EntryContainer entry, File entryDir) throws IOException {
extractEntry(entry, entryDir);
extractHeaders(entry, entryDir);
extractMeta(entry, entryDir);
extractEntry(entry, entryDir);
Attachment attachments[] = entry.getAttachments();
if (attachments != null) {
for (int i = 0; i < attachments.length; i++) {
@ -97,10 +97,14 @@ public class EntryExtractor {
}
}
private void extractEntry(EntryContainer entry, File entryDir) throws IOException {
Entry e = entry.getEntry();
if (e == null) throw new IOException("Entry is null");
String text = e.getText();
if (text == null) throw new IOException("Entry text is null");
FileOutputStream out = null;
try {
out = new FileOutputStream(new File(entryDir, ENTRY));
out.write(DataHelper.getUTF8(entry.getEntry().getText()));
out.write(DataHelper.getUTF8(text));
} finally {
out.close();
}

View File

@ -46,9 +46,6 @@ public class Sucker {
public Sucker() {}
/**
* Constructor for BlogManager.
*/
public Sucker(String[] strings) throws IllegalArgumentException {
SuckerState state = new SuckerState();
state.pushToSyndie=true;
@ -75,6 +72,7 @@ public class Sucker {
state.user = state.bm.getUser(blogHash);
if(state.user==null)
throw new IllegalArgumentException("wtf, user==null? hash:"+blogHash);
state.history = new ArrayList();
_state = state;
}
@ -162,11 +160,6 @@ public class Sucker {
_log.debug("message number: " + _state.messageNumber);
// Create historyFile if missing
_state.historyFile = new File(_state.historyPath);
if (!_state.historyFile.exists())
_state.historyFile.createNewFile();
_state.shouldProxy = false;
_state.proxyPortNum = -1;
if ( (_state.proxyHost != null) && (_state.proxyPort != null) ) {
@ -194,7 +187,7 @@ public class Sucker {
boolean ok = lsnr.waitForSuccess();
if (!ok) {
_log.debug("success? " + ok);
System.err.println("Unable to retrieve the url after " + numRetries + " tries.");
System.err.println("Unable to retrieve the url [" + _state.urlToLoad + "] after " + numRetries + " tries.");
fetched.delete();
return _state.entriesPosted;
}
@ -213,31 +206,56 @@ public class Sucker {
_log.debug("entries: " + entries.size());
FileOutputStream hos = null;
loadHistory();
// Process list backwards to get syndie to display the
// entries in the right order. (most recent at top)
List feedMessageIds = new ArrayList();
for (int i = entries.size()-1; i >= 0; i--) {
SyndEntry e = (SyndEntry) entries.get(i);
try {
hos = new FileOutputStream(_state.historyFile, true);
_state.attachmentCounter=0;
if (_log.shouldLog(Log.DEBUG))
_log.debug("Syndicate entry: " + e.getLink());
// Calculate messageId, and check if we have got the message already
String feedHash = sha1(_state.urlToLoad);
String itemHash = sha1(e.getTitle() + e.getDescription());
Date d = e.getPublishedDate();
String time;
if(d!=null)
time = "" + d.getTime();
else
time = "" + new Date().getTime();
String outputFileName = _state.outputDir + "/" + _state.messageNumber;
String messageId = feedHash + ":" + itemHash + ":" + time + ":" + outputFileName;
// Process list backwards to get syndie to display the
// entries in the right order. (most recent at top)
for (int i = entries.size()-1; i >= 0; i--) {
SyndEntry e = (SyndEntry) entries.get(i);
_state.attachmentCounter=0;
if (_log.shouldLog(Log.DEBUG))
_log.debug("Syndicate entry: " + e.getLink());
String messageId = convertToSml(_state, e);
if (messageId!=null) {
hos.write(messageId.getBytes());
hos.write("\n".getBytes());
}
}
} finally {
if (hos != null) try { hos.close(); } catch (IOException ioe) {}
// Make sure these messageIds get into the history file
feedMessageIds.add(messageId);
// Check if we already have this
if (existsInHistory(_state, messageId))
continue;
infoLog("new: " + messageId);
// process the new entry
processEntry(_state, e, time);
}
// update history
pruneHistory(_state.urlToLoad, 42*10); // could use 0 if we were sure old entries never re-appear
Iterator iter = feedMessageIds.iterator();
while(iter.hasNext())
{
String newMessageId = (String)iter.next();
if(!existsInHistory(_state, newMessageId))
addHistory(newMessageId); // add new message ids from current feed to history
}
storeHistory();
// call script if we don't just feed syndie
if(!_state.pushToSyndie) {
FileOutputStream fos = null;
try {
@ -264,6 +282,111 @@ public class Sucker {
return _state.entriesPosted;
}
private void loadHistory() {
try {
// Create historyFile if missing
_state.historyFile = new File(_state.historyPath);
if (!_state.historyFile.exists())
_state.historyFile.createNewFile();
FileInputStream is = new FileInputStream(_state.historyFile);
String s;
while((s=readLine(is))!=null)
{
addHistory(s);
}
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private boolean existsInHistory(SuckerState state, String messageId) {
int idx;
idx = messageId.lastIndexOf(":");
String lineToCompare = messageId.substring(0, idx-1);
idx = lineToCompare.lastIndexOf(":");
lineToCompare = lineToCompare.substring(0, idx-1);
Iterator iter = _state.history.iterator();
while(iter.hasNext())
{
String line = (String)iter.next();
idx = line.lastIndexOf(":");
if (idx < 0)
return false;
line = line.substring(0, idx-1);
idx = line.lastIndexOf(":");
if (idx < 0)
return false;
line = line.substring(0, idx-1);
if (line.equals(lineToCompare))
return true;
}
return false;
}
private void addHistory(String messageId) {
_state.history.add(messageId);
}
private void pruneHistory(String url, int nrToKeep) {
int i=0;
String urlHash=sha1(url);
// Count nr of entries containing url hash
Iterator iter = _state.history.iterator();
while(iter.hasNext())
{
String historyLine = (String) iter.next();
if(historyLine.startsWith(urlHash))
{
i++;
}
}
// drop the oldest matching entries, keeping the most recent nrToKeep

i = i - nrToKeep;
if(i>0)
{
iter = _state.history.iterator();
while(i>0 && iter.hasNext())
{
String historyLine = (String) iter.next();
if(historyLine.startsWith(urlHash))
{
iter.remove();
i--;
}
}
}
}
private void storeHistory() {
FileOutputStream hos = null;
try {
hos = new FileOutputStream(_state.historyFile, false);
Iterator iter = _state.history.iterator();
while(iter.hasNext())
{
String historyLine = (String) iter.next();
hos.write(historyLine.getBytes());
hos.write("\n".getBytes());
}
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
if (hos != null) try { hos.close(); } catch (IOException ioe) {}
}
}
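Editor's note: the history helpers above implement duplicate detection for imported feed items. A messageId has the form feedHash:itemHash:time:outputFile, and two ids refer to the same item when they match after the last two fields are stripped, so re-fetching the same entry later (new time, new output file) is treated as a duplicate. A simplified sketch of that intent (the committed code trims with substring(0, idx-1), which also drops the character before each colon, but it does so consistently on both sides of the comparison):

import java.util.ArrayList;
import java.util.List;

// Sketch of the history-based duplicate check: compare ids with time and output file removed.
public class FeedHistorySketch {
    static String stripLastTwoFields(String id) {
        int idx = id.lastIndexOf(':');
        id = id.substring(0, idx);   // drop ":outputFile"
        idx = id.lastIndexOf(':');
        return id.substring(0, idx); // drop ":time"
    }

    static boolean existsInHistory(List<String> history, String messageId) {
        String key = stripLastTwoFields(messageId);
        for (String line : history)
            if (stripLastTwoFields(line).equals(key))
                return true;
        return false;
    }

    public static void main(String[] args) {
        List<String> history = new ArrayList<String>();
        history.add("feedA:item1:1000:out/1");
        // same feed and item, different fetch time and output file -> duplicate
        System.out.println(existsInHistory(history, "feedA:item1:2000:out/7")); // true
        System.out.println(existsInHistory(history, "feedA:item2:2000:out/8")); // false
    }
}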
public static void main(String[] args) {
Sucker sucker = new Sucker();
boolean ok = sucker.parseArgs(args);
@ -288,8 +411,6 @@ public class Sucker {
*/
private static boolean execPushScript(SuckerState state, String id, String time) {
try {
String ls_str;
String cli = state.pushScript + " " + state.outputDir + " " + id + " " + time;
Process pushScript_proc = Runtime.getRuntime().exec(cli);
@ -327,28 +448,11 @@ public class Sucker {
/**
* Converts the SyndEntry e to sml and fetches any images as attachments
*/
private static String convertToSml(SuckerState state, SyndEntry e) {
private static boolean processEntry(SuckerState state, SyndEntry e, String time) {
String subject;
state.stripNewlines=false;
// Calculate messageId, and check if we have got the message already
String feedHash = sha1(state.urlToLoad);
String itemHash = sha1(e.getTitle() + e.getDescription());
Date d = e.getPublishedDate();
String time;
if(d!=null)
time = "" + d.getTime();
else
time = "" + new Date().getTime();
String outputFileName = state.outputDir + "/" + state.messageNumber;
String messageId = feedHash + ":" + itemHash + ":" + time + ":" + outputFileName;
// Check if we already have this
if (existsInHistory(state, messageId))
return null;
infoLog("new: " + messageId);
try {
String sml="";
@ -370,7 +474,6 @@ public class Sucker {
List l = e.getContents();
if(l!=null)
{
debugLog("There is content");
iter = l.iterator();
while(iter.hasNext())
{
@ -402,8 +505,8 @@ public class Sucker {
}
String source=e.getLink(); //Uri();
if(source.indexOf("http")<0)
source=state.baseUrl+source;
if(!source.startsWith("http://"))
source=state.baseUrl+source;
sml += "[link schema=\"web\" location=\""+source+"\"]source[/link]\n";
if(state.pushToSyndie) {
@ -426,7 +529,7 @@ public class Sucker {
if(uri==null) {
errorLog("pushToSyndie failure.");
return null;
return false;
} else {
state.entriesPosted.add(uri);
infoLog("pushToSyndie success, uri: "+uri.toString());
@ -448,14 +551,14 @@ public class Sucker {
}
state.messageNumber++;
deleteTempFiles(state);
return messageId;
return true;
} catch (FileNotFoundException e1) {
e1.printStackTrace();
} catch (IOException e2) {
e2.printStackTrace();
}
deleteTempFiles(state);
return null;
return false;
}
private static void deleteTempFiles(SuckerState state) {
@ -570,7 +673,7 @@ public class Sucker {
ret+="[/img]";
if(imageLink.indexOf("http")<0)
if(!imageLink.startsWith("http://"))
imageLink=state.baseUrl+"/"+imageLink;
fetchAttachment(state, imageLink);
@ -592,7 +695,7 @@ public class Sucker {
if (b >= htmlTagLowerCase.length())
return null; // abort the b0rked tag
String link=htmlTag.substring(a,b);
if(link.indexOf("http")<0)
if(!link.startsWith("http://"))
link=state.baseUrl+"/"+link;
String schema="web";
@ -613,6 +716,7 @@ public class Sucker {
state.pendingEndLink=false;
return "[/link]";
}
return "";
}
if("<b>".equals(htmlTagLowerCase))
@ -645,8 +749,21 @@ public class Sucker {
return "";
if("</br>".equals(htmlTagLowerCase))
return "";
if(htmlTagLowerCase.startsWith("<table") || "</table>".equals(htmlTagLowerCase)) // emulate table with hr
if(htmlTagLowerCase.startsWith("<hr"))
return "";
if("</img>".equals(htmlTagLowerCase))
return "";
if("</font>".equals(htmlTagLowerCase))
return "";
if("<blockquote>".equals(htmlTagLowerCase))
return "[quote]";
if("</blockquote>".equals(htmlTagLowerCase))
return "[/quote]";
if(htmlTagLowerCase.startsWith("<table") || "</table>".equals(htmlTagLowerCase)) // emulate table with hr :)
return "[hr][/hr]";
if(htmlTagLowerCase.startsWith("<font"))
return "";
for(int i=0;i<ignoreTags.length;i++) {
String openTag = "<"+ignoreTags[i];
@ -754,36 +871,6 @@ public class Sucker {
return -1;
}
private static boolean existsInHistory(SuckerState state, String messageId) {
int idx;
idx = messageId.lastIndexOf(":");
String lineToCompare = messageId.substring(0, idx-1);
idx = lineToCompare.lastIndexOf(":");
lineToCompare = lineToCompare.substring(0, idx-1);
FileInputStream his = null;
try {
his = new FileInputStream(state.historyFile);
String line;
while ((line = readLine(his)) != null) {
idx = line.lastIndexOf(":");
if (idx < 0)
return false;
line = line.substring(0, idx-1);
idx = line.lastIndexOf(":");
if (idx < 0)
return false;
line = line.substring(0, idx-1);
if (line.equals(lineToCompare))
return true;
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} finally {
if (his != null) try { his.close(); } catch (IOException ioe) {}
}
return false;
}
private static String sha1(String s) {
try {
MessageDigest md = MessageDigest.getInstance("SHA");
@ -805,10 +892,10 @@ public class Sucker {
c = in.read();
} catch (IOException e) {
e.printStackTrace();
break;
return null;
}
if (c < 0)
break;
return null;
if (c == '\n')
break;
sb.append((char) c);
@ -897,6 +984,7 @@ class SuckerState {
BlogManager bm;
User user;
List entriesPosted;
List history;
//
List fileNames;

View File

@ -106,7 +106,12 @@ public class User {
public String getUsername() { return _username; }
public String getUserHash() { return _userHash; }
public Hash getBlog() { return _blog; }
public String getBlogStr() { return Base64.encode(_blog.getData()); }
public String getBlogStr() {
if (_blog != null)
return Base64.encode(_blog.getData());
else
return null;
}
public long getMostRecentEntry() { return _mostRecentEntry; }
public Map getBlogGroups() { return _blogGroups; }
public List getShitlistedBlogs() { return _shitlistedBlogs; }

View File

@ -163,8 +163,9 @@ public class ArchiveIndex {
/** list of unique blogs locally known (set of Hash) */
public Set getUniqueBlogs() {
Set rv = new HashSet();
for (int i = 0; i < _blogs.size(); i++)
for (int i = 0; i < _blogs.size(); i++) {
rv.add(getBlog(i));
}
return rv;
}
public List getReplies(BlogURI uri) {
@ -367,7 +368,10 @@ public class ArchiveIndex {
return;
tok.nextToken();
String keyStr = tok.nextToken();
Hash keyHash = new Hash(Base64.decode(keyStr));
byte k[] = Base64.decode(keyStr);
if ( (k == null) || (k.length != Hash.HASH_LENGTH) )
return; // ignore bad hashes
Hash keyHash = new Hash(k);
String whenStr = tok.nextToken();
long when = getIndexDate(whenStr);
String tag = tok.nextToken();

View File

@ -60,7 +60,7 @@ public class EntryContainer {
this();
_entryURI = uri;
if ( (smlData == null) || (smlData.length <= 0) )
_entryData = new Entry(null);
_entryData = new Entry(""); //null);
else
_entryData = new Entry(DataHelper.getUTF8(smlData));
setHeader(HEADER_BLOGKEY, Base64.encode(uri.getKeyHash().getData()));
@ -277,7 +277,7 @@ public class EntryContainer {
}
if (_entryData == null)
_entryData = new Entry(null);
_entryData = new Entry(""); //null);
_attachments = new Attachment[attachments.size()];

View File

@ -34,7 +34,9 @@ public class BlogRenderer extends HTMLRenderer {
}
public void receiveHeaderEnd() {
_preBodyBuffer.append("<div class=\"syndieBlogPost\"><hr style=\"display: none\" />\n");
_preBodyBuffer.append("<div class=\"syndieBlogPost\" id=\"");
_preBodyBuffer.append(_entry.getURI().getKeyHash().toBase64()).append('/').append(_entry.getURI().getEntryId());
_preBodyBuffer.append("\"><hr style=\"display: none\" />\n");
_preBodyBuffer.append("<div class=\"syndieBlogPostHeader\">\n");
_preBodyBuffer.append("<div class=\"syndieBlogPostSubject\">");
String subject = (String)_headers.get(HEADER_SUBJECT);
@ -160,12 +162,24 @@ public class BlogRenderer extends HTMLRenderer {
protected String getEntryURL(boolean showImages) {
return getEntryURL(_entry, _blog, showImages);
}
static String getEntryURL(EntryContainer entry, BlogInfo blog, boolean showImages) {
static String getEntryURL(EntryContainer entry, BlogInfo blog, boolean showImages) {
if (entry == null) return "unknown";
return "blog.jsp?"
+ ViewBlogServlet.PARAM_BLOG + "=" + (blog != null ? blog.getKey().calculateHash().toBase64() : "") + "&amp;"
+ ViewBlogServlet.PARAM_ENTRY + "="
+ Base64.encode(entry.getURI().getKeyHash().getData()) + '/' + entry.getURI().getEntryId();
return getEntryURL(entry.getURI(), blog, null, showImages);
}
static String getEntryURL(BlogURI entry, BlogInfo blog, BlogURI comment, boolean showImages) {
if (entry == null) return "unknown";
if (comment == null) {
return "blog.jsp?"
+ ViewBlogServlet.PARAM_BLOG + "=" + (blog != null ? blog.getKey().calculateHash().toBase64() : "") + "&amp;"
+ ViewBlogServlet.PARAM_ENTRY + "="
+ Base64.encode(entry.getKeyHash().getData()) + '/' + entry.getEntryId();
} else {
return "blog.jsp?"
+ ViewBlogServlet.PARAM_BLOG + "=" + (blog != null ? blog.getKey().calculateHash().toBase64() : "") + "&amp;"
+ ViewBlogServlet.PARAM_ENTRY + "="
+ Base64.encode(entry.getKeyHash().getData()) + '/' + entry.getEntryId()
+ '#' + Base64.encode(comment.getKeyHash().getData()) + '/' + comment.getEntryId();
}
}
protected String getAttachmentURLBase() {
@ -218,4 +232,4 @@ public class BlogRenderer extends HTMLRenderer {
buf.append(ViewBlogServlet.PARAM_OFFSET).append('=').append(pageNum*numPerPage).append("&amp;");
return buf.toString();
}
}
}

View File

@ -122,36 +122,52 @@ public class ThreadedHTMLRenderer extends HTMLRenderer {
public static String getViewPostLink(String uri, ThreadNode node, User user, boolean isPermalink,
String offset, String tags, String author, boolean authorOnly) {
StringBuffer buf = new StringBuffer(64);
buf.append(uri);
if (node.getChildCount() > 0) {
buf.append('?').append(PARAM_VISIBLE).append('=');
ThreadNode child = node.getChild(0);
buf.append(child.getEntry().getKeyHash().toBase64()).append('/');
buf.append(child.getEntry().getEntryId()).append('&');
if (isPermalink) {
// link to the blog view of the original poster
BlogURI rootBlog = null;
ThreadNode parent = node;
while (parent != null) {
if (parent.getParent() != null) {
parent = parent.getParent();
} else {
rootBlog = parent.getEntry();
break;
}
}
BlogInfo root = BlogManager.instance().getArchive().getBlogInfo(rootBlog.getKeyHash());
return BlogRenderer.getEntryURL(parent.getEntry(), root, node.getEntry(), true);
} else {
buf.append('?').append(PARAM_VISIBLE).append('=');
StringBuffer buf = new StringBuffer(64);
buf.append(uri);
if (node.getChildCount() > 0) {
buf.append('?').append(PARAM_VISIBLE).append('=');
ThreadNode child = node.getChild(0);
buf.append(child.getEntry().getKeyHash().toBase64()).append('/');
buf.append(child.getEntry().getEntryId()).append('&');
} else {
buf.append('?').append(PARAM_VISIBLE).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
}
buf.append(PARAM_VIEW_POST).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
if (!isPermalink) {
if (!empty(offset))
buf.append(PARAM_OFFSET).append('=').append(offset).append('&');
if (!empty(tags))
buf.append(PARAM_TAGS).append('=').append(tags).append('&');
}
if (authorOnly && !empty(author)) {
buf.append(PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(PARAM_THREAD_AUTHOR).append("=true&");
} else if (!isPermalink && !empty(author))
buf.append(PARAM_AUTHOR).append('=').append(author).append('&');
return buf.toString();
}
buf.append(PARAM_VIEW_POST).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
if (!isPermalink) {
if (!empty(offset))
buf.append(PARAM_OFFSET).append('=').append(offset).append('&');
if (!empty(tags))
buf.append(PARAM_TAGS).append('=').append(tags).append('&');
}
if (authorOnly && !empty(author)) {
buf.append(PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(PARAM_THREAD_AUTHOR).append("=true&");
} else if (!isPermalink && !empty(author))
buf.append(PARAM_AUTHOR).append('=').append(author).append('&');
return buf.toString();
}
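Editor's note: the permalink branch above now walks the thread node's parent chain up to the root entry and builds a blog.jsp link to that root, with the selected post as the #anchor, instead of linking into the threaded view. A small sketch of the walk-to-root step on a generic tree node (the ThreadNode shape is simplified here):

// Simplified sketch of the "walk up to the thread root" step used for permalinks above.
public class ThreadRootSketch {
    static class Node {
        final String entry;
        final Node parent;
        Node(String entry, Node parent) { this.entry = entry; this.parent = parent; }
    }

    /** Follow parent links until the node with no parent (the thread root) is reached. */
    static Node root(Node node) {
        Node cur = node;
        while (cur.parent != null)
            cur = cur.parent;
        return cur;
    }

    public static void main(String[] args) {
        Node top = new Node("root-post", null);
        Node reply = new Node("reply", top);
        Node nested = new Node("nested-reply", reply);
        System.out.println(root(nested).entry); // root-post
    }
}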
public static String getViewPostLink(String uri, BlogURI post, User user, boolean isPermalink,
@ -272,8 +288,7 @@ public class ThreadedHTMLRenderer extends HTMLRenderer {
out.write("\n<a href=\"");
out.write(getViewPostLink(baseURI, node, user, true, offset, requestTags, filteredAuthor, authorOnly));
out.write("\" title=\"Select a shareable link directly to this post\">permalink</a>\n");
out.write("\" title=\"Select a link directly to this post within the blog\">permalink</a>\n");
if (true || (!inlineReply) ) {
String refuseReply = (String)_headers.get(HEADER_REFUSE_REPLIES);

View File

@ -46,6 +46,7 @@ public class AddressesServlet extends BaseServlet {
public static final String ACTION_DELETE_BLOG = "Delete author";
public static final String ACTION_UPDATE_BLOG = "Update author";
public static final String ACTION_ADD_BLOG = "Add author";
public static final String ACTION_PURGE_AND_BAN_BLOG = "Purge and ban author";
public static final String ACTION_DELETE_ARCHIVE = "Delete archive";
public static final String ACTION_UPDATE_ARCHIVE = "Update archive";
@ -128,6 +129,8 @@ public class AddressesServlet extends BaseServlet {
if (pn.isMember(FilteredThreadIndex.GROUP_IGNORE)) {
out.write("Ignored? <input type=\"checkbox\" name=\"" + PARAM_IGNORE
+ "\" checked=\"true\" value=\"true\" title=\"If true, their threads are hidden\" /> ");
if (BlogManager.instance().authorizeRemote(user))
out.write("<input type=\"submit\" name=\"" + PARAM_ACTION + "\" value=\"" + ACTION_PURGE_AND_BAN_BLOG + "\" /> ");
} else {
out.write("Ignored? <input type=\"checkbox\" name=\"" + PARAM_IGNORE
+ "\" value=\"true\" title=\"If true, their threads are hidden\" /> ");

View File

@ -64,13 +64,13 @@ public abstract class BaseServlet extends HttpServlet {
* key=value& of params that need to be tacked onto an http request that updates data, to
* prevent spoofing
*/
protected static String getAuthActionParams() { return PARAM_AUTH_ACTION + '=' + _authNonce + '&'; }
protected static String getAuthActionParams() { return PARAM_AUTH_ACTION + '=' + _authNonce + "&amp;"; }
/**
* key=value& of params that need to be tacked onto an http request that updates data, to
* prevent spoofing
*/
public static void addAuthActionParams(StringBuffer buf) {
buf.append(PARAM_AUTH_ACTION).append('=').append(_authNonce).append('&');
buf.append(PARAM_AUTH_ACTION).append('=').append(_authNonce).append("&amp;");
}
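Editor's note: these two helpers append the anti-spoofing nonce parameter to URLs that trigger state changes; the change replaces the raw '&' separator with "&amp;" because the resulting URLs are written into HTML attributes, where a bare ampersand must be escaped. A short sketch of the difference (nonce value and parameter name are placeholders):

// Sketch: the same query string, raw vs. escaped for embedding in an HTML attribute.
public class NonceParamSketch {
    public static void main(String[] args) {
        String nonce = "abc123";                    // placeholder nonce value
        String raw  = "action=" + nonce + "&";      // fine inside an HTTP request line
        String html = "action=" + nonce + "&amp;";  // required inside href="..." markup
        System.out.println("<a href=\"post.jsp?" + html + "offset=0\">next</a>");
        System.out.println("GET /post.jsp?" + raw + "offset=0");
    }
}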
public void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
@ -295,7 +295,7 @@ public abstract class BaseServlet extends HttpServlet {
if (AddressesServlet.ACTION_ADD_TAG.equals(action)) {
String name = req.getParameter(AddressesServlet.PARAM_NAME);
if (!user.getPetNameDB().containsName(name)) {
if ((name != null) && (name.trim().length() > 0) && (!user.getPetNameDB().containsName(name)) ) {
PetName pn = new PetName(name, AddressesServlet.NET_SYNDIE, AddressesServlet.PROTO_TAG, name);
user.getPetNameDB().add(pn);
BlogManager.instance().saveUser(user);
@ -307,7 +307,7 @@ public abstract class BaseServlet extends HttpServlet {
(AddressesServlet.ACTION_ADD_OTHER.equals(action)) ||
(AddressesServlet.ACTION_ADD_PEER.equals(action)) ) {
PetName pn = buildNewAddress(req);
if ( (pn != null) && (pn.getName() != null) && (pn.getLocation() != null) &&
if ( (pn != null) && (pn.getName() != null) && (pn.getName().trim().length() > 0) && (pn.getLocation() != null) &&
(!user.getPetNameDB().containsName(pn.getName())) ) {
user.getPetNameDB().add(pn);
BlogManager.instance().saveUser(user);
@ -329,6 +329,34 @@ public abstract class BaseServlet extends HttpServlet {
(AddressesServlet.ACTION_UPDATE_OTHER.equals(action)) ||
(AddressesServlet.ACTION_UPDATE_PEER.equals(action)) ) {
return updateAddress(user, req);
} else if (AddressesServlet.ACTION_PURGE_AND_BAN_BLOG.equals(action)) {
String name = req.getParameter(AddressesServlet.PARAM_NAME);
PetName pn = user.getPetNameDB().getByName(name);
if (pn != null) {
boolean purged = false;
if (BlogManager.instance().authorizeRemote(user)) {
Hash h = null;
BlogURI uri = new BlogURI(pn.getLocation());
if (uri.getKeyHash() != null) {
h = uri.getKeyHash();
}
if (h == null) {
byte b[] = Base64.decode(pn.getLocation());
if ( (b != null) && (b.length == Hash.HASH_LENGTH) )
h = new Hash(b);
}
if (h != null) {
BlogManager.instance().purgeAndBan(h);
purged = true;
}
}
if (purged) // force a new thread index
return true;
else
return false;
} else {
return false;
}
} else if ( (AddressesServlet.ACTION_DELETE_ARCHIVE.equals(action)) ||
(AddressesServlet.ACTION_DELETE_BLOG.equals(action)) ||
(AddressesServlet.ACTION_DELETE_EEPSITE.equals(action)) ||
@ -716,6 +744,8 @@ public abstract class BaseServlet extends HttpServlet {
for (Iterator iter = names.iterator(); iter.hasNext(); ) {
String name = (String) iter.next();
PetName pn = db.getByName(name);
if (pn == null)
continue;
String proto = pn.getProtocol();
String loc = pn.getLocation();
if (proto != null && loc != null && "syndieblog".equals(proto) && pn.isMember(FilteredThreadIndex.GROUP_FAVORITE)) {
@ -866,22 +896,22 @@ public abstract class BaseServlet extends HttpServlet {
ThreadNode child = node.getChild(0);
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(child.getEntry().getKeyHash().toBase64()).append('/');
buf.append(child.getEntry().getEntryId()).append('&');
buf.append(child.getEntry().getEntryId()).append("&amp;");
}
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
return buf.toString();
}
@ -901,21 +931,21 @@ public abstract class BaseServlet extends HttpServlet {
// collapse node == let the node be visible
buf.append('?').append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
buf.append(node.getEntry().getEntryId()).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
return buf.toString();
}
@ -939,23 +969,23 @@ public abstract class BaseServlet extends HttpServlet {
buf.append(uri);
buf.append('?');
if (!empty(visible))
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_LOCATION).append('=').append(author.toBase64()).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_NAME).append('=').append(group).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_LOCATION).append('=').append(author.toBase64()).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_NAME).append('=').append(group).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(filteredAuthor))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append("&amp;");
addAuthActionParams(buf);
return buf.toString();
@ -966,23 +996,23 @@ public abstract class BaseServlet extends HttpServlet {
buf.append(uri);
buf.append('?');
if (!empty(visible))
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP_NAME).append('=').append(name).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP).append('=').append(group).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP_NAME).append('=').append(name).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP).append('=').append(group).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(filteredAuthor))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append("&amp;");
addAuthActionParams(buf);
return buf.toString();
@ -1024,24 +1054,23 @@ public abstract class BaseServlet extends HttpServlet {
}
buf.append('?').append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(expandTo.getKeyHash().toBase64()).append('/');
buf.append(expandTo.getEntryId()).append('&');
buf.append(expandTo.getEntryId()).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
buf.append(node.getEntry().getEntryId()).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author)) {
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
if (authorOnly)
buf.append(ThreadedHTMLRenderer.PARAM_THREAD_AUTHOR).append("=true&");
buf.append(ThreadedHTMLRenderer.PARAM_THREAD_AUTHOR).append("=true&amp;");
}
buf.append("#").append(node.getEntry().toString());
return buf.toString();
}

View File

@ -17,13 +17,6 @@ import net.i2p.syndie.sml.*;
/**
* Schedule the import of atom/rss feeds
*
* <p><h3>todo:</h3>
* <p>caching (eepget should do it)
* <p>enclosures support (requires cvs rome)
* <p>syndie.sucker.minHistory/maxHistory used to roll over the history file?
* <p>configurable update period
*
*/
public class ImportFeedServlet extends BaseServlet {
protected String getTitle() { return "Syndie :: Import feed"; }
@ -33,6 +26,7 @@ public class ImportFeedServlet extends BaseServlet {
if (!BlogManager.instance().authorizeRemote(user)) {
out.write("<tr><td colspan=\"3\"><span class=\"b_rssMsgErr\">You are not authorized for remote access.</span></td></tr>\n");
return;
} else {
out.write("<tr><td colspan=\"3\">");
@ -79,7 +73,6 @@ public class ImportFeedServlet extends BaseServlet {
}
}
} else if ( (action != null) && ("Delete".equals(action)) ) {
out.write("<span class=\"b_rssImportMsgErr\">Delete some thing</span><br />");
if (url == null || blog == null || tagPrefix == null) {
out.write("<span class=\"b_rssImportMsgErr\">error, some fields were empty.</span><br />");
} else {

View File

@ -62,6 +62,8 @@ public class RemoteArchiveBean {
}
private boolean ignoreBlog(User user, Hash blog) {
if (BlogManager.instance().isBanned(blog))
return true;
PetNameDB db = user.getPetNameDB();
PetName pn = db.getByLocation(blog.toBase64());
return ( (pn!= null) && (pn.isMember("Ignore")) );
@ -639,6 +641,8 @@ public class RemoteArchiveBean {
int newBlogs = 0;
for (Iterator iter = remoteBlogs.iterator(); iter.hasNext(); ) {
Hash blog = (Hash)iter.next();
if ( (blog == null) || (blog.getData() == null) || (blog.getData().length <= 0) )
continue;
if (ignoreBlog(user, blog))
continue;
if (!localBlogs.contains(blog)) {

View File

@ -85,4 +85,360 @@ td.s_detail_summDetail {
td.s_summary_summ {
font-size: 0.8em;
background-color: #DDDDFF;
}
/* following are doubtful salmon's contributions */
body {
margin : 0px;
padding : 0px;
width: 99%;
font-family : Arial, sans-serif, Helvetica;
background-color : #FFF;
color : black;
font-size : 100%;
/* we've avoided Tantek Hacks so far,
** but we can't avoid using the non-w3c method of
** box rendering. (and therefore one of mozilla's
** proprietary -moz properties (which hopefully they'll
** drop soon).
*/
-moz-box-sizing : border-box;
box-sizing : border-box;
}
a:link{color:#007}
a:visited{color:#606}
a:hover{color:#720}
a:active{color:#900}
select {
min-width: 1.5em;
}
.overallTable {
border-spacing: 0px;
border-collapse: collapse;
float: left;
}
.topNav {
background-color: #BBB;
}
.topNav_user {
text-align: left;
float: left;
display: inline;
}
.topNav_admin {
text-align: right;
float: right;
margin: 0 5px 0 0;
display: inline;
}
.controlBar {
border-bottom: thick double #CCF;
border-left: medium solid #CCF;
border-right: medium solid #CCF;
background-color: #EEF;
color: inherit;
font-size: small;
clear: left; /* fixes a bug in Opera */
}
.controlBarRight {
text-align: right;
}
.threadEven {
background-color: #FFF;
white-space: nowrap;
}
.threadOdd {
background-color: #FFC;
white-space: nowrap;
}
.threadLeft {
text-align: left;
}
.threadNav {
background-color: #EEF;
border: medium solid #CCF;
}
.threadNavRight {
text-align: right;
float: right;
background-color: #EEF;
}
.rightOffset {
float: right;
margin: 0 5px 0 0;
display: inline;
}
.threadInfoLeft {
float: left;
margin: 5px 0px 0 0;
display: inline;
}
.threadInfoRight {
float: right;
margin: 0 5px 0 0;
display: inline;
}
.postMeta {
border-top: 1px solid black;
background-color: #FFB;
}
.postMetaSubject {
text-align: left;
font-size: large;
}
.postMetaLink {
text-align: right;
}
.postDetails {
background-color: #FFC;
}
.postReply {
background-color: #CCF;
}
.postReplyText {
background-color: #CCF;
}
.postReplyOptions {
background-color: #CCF;
}
.syndieBlogTopNav {
padding: 0.5em;
width: 98%;
border: medium solid #CCF;
background-color: #EEF;
font-size: small;
}
.syndieBlogTopNavUser {
text-align: left;
}
.syndieBlogTopNavAdmin {
text-align: right;
}
.syndieBlogHeader {
width: 100%;
font-size: 1.4em;
background-color: #000;
text-align: Left;
float: Left;
}
.syndieBlogHeader a {
color: #FFF;
padding: 4px;
}
.syndieBlogHeader a:hover {
color:#88F;
padding: 4px;
}
.syndieBlogLogo {
float: left;
display: inline;
}
.syndieBlogLinks {
width: 20%;
float: left;
}
.syndieBlogLinkGroup {
font-size: 0.8em;
background-color: #DDD;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogLinkGroup ul {
list-style: none;
}
.syndieBlogLinkGroup li {
}
.syndieBlogLinkGroupName {
font-weight: bold;
width: 100%;
border-bottom: 1px dashed black;
display: block;
}
.syndieBlogPostInfoGroup {
font-size: 0.8em;
background-color: #FFEA9F;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogPostInfoGroup ol {
list-style: none;
}
.syndieBlogPostInfoGroup li {
}
.syndieBlogPostInfoGroup li a {
display: block;
}
.syndieBlogPostInfoGroupName {
font-weight: bold;
width: 100%;
border-bottom: 1px dashed black;
display: block;
}
.syndieBlogMeta {
text-align: left;
font-size: 0.8em;
background-color: #DDD;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogBody {
width: 80%;
float: left;
}
.syndieBlogPost {
border: 1px solid black;
margin-top: 5px;
margin-right: 5px;
}
.syndieBlogPostHeader {
background-color: #FFB;
padding: 2px;
}
.syndieBlogPostSubject {
font-weight: bold;
}
.syndieBlogPostFrom {
text-align: right;
}
.syndieBlogPostSummary {
background-color: #FFF;
padding: 2px;
}
.syndieBlogPostDetails {
background-color: #FFC;
padding: 2px;
}
.syndieBlogNav {
text-align: center;
}
.syndieBlogComments {
border: none;
margin-top: 5px;
margin-left: 0px;
float: left;
}
.syndieBlogComments ul {
list-style: none;
margin-left: 10px;
}
.syndieBlogCommentInfoGroup {
font-size: 0.8em;
margin-right: 5px;
}
.syndieBlogCommentInfoGroup ol {
list-style: none;
}
.syndieBlogCommentInfoGroup li {
}
.syndieBlogCommentInfoGroup li a {
display: block;
}
.syndieBlogCommentInfoGroupName {
font-size: 0.8em;
font-weight: bold;
}
.syndieBlogFavorites {
float: left;
margin: 5px 0px 0 0;
display: inline;
}
.syndieBlogList {
float: right;
margin: 5px 0px 0 0;
display: inline;
}
.b_topnavUser {
text-align: right;
background-color: #CCD;
}
.b_topnavHome {
background-color: #CCD;
color: #000;
width: 50px;
text-align: left;
}
.b_topnav {
background-color: #CCD;
}
.b_content {
}
.s_summary_overall {
}
.s_detail_overall {
}
.s_detail_subject {
font-size: 0.8em;
text-align: left;
background-color: #CCF;
}
.s_detail_quote {
margin-left: 1em;
border: 1px solid #DBDBDB;
background-color: #E0E0E0;
}
.s_detail_italic {
font-style: italic;
}
.s_detail_bold {
font-style: normal;
font-weight: bold;
}
.s_detail_underline {
font-style: normal;
text-decoration: underline;
}
.s_detail_meta {
font-size: 0.8em;
text-align: right;
background-color: #CCF;
}
.s_summary_subject {
font-size: 0.8em;
text-align: left;
background-color: #CCF;
}
.s_summary_meta {
font-size: 0.8em;
text-align: right;
background-color: #CCF;
}
.s_summary_quote {
margin-left: 1em;
border: 1px solid #DBDBDB;
background-color: #E0E0E0;
}
.s_summary_italic {
font-style: italic;
}
.s_summary_bold {
font-style: normal;
font-weight: bold;
}
.s_summary_underline {
font-style: normal;
text-decoration: underline;
}
.s_summary_summDetail {
font-size: 0.8em;
}
.s_detail_summDetail {
}
.s_detail_summDetailBlog {
}
.s_detail_summDetailBlogLink {
}
td.s_detail_summDetail {
background-color: #CCF;
}
td.s_summary_summ { width: 80%;
font-size: 0.8em;
background-color: #CCF;
}

View File

@ -169,7 +169,14 @@ public class SysTray implements SysTrayMenuListener {
_itemOpenConsole.addSysTrayMenuListener(this);
// _sysTrayMenu.addItem(_itemShutdown);
// _sysTrayMenu.addSeparator();
_sysTrayMenu.addItem(_itemSelectBrowser);
// hide it, as there have been reports of b0rked behavior on some JVMs.
// specifically, that on XP & sun1.5.0.1, a user launching i2p w/out the
// service wrapper would create netDb/, peerProfiles/, and other files
// underneath each directory browsed to - as if the router's "." directory
// is changing whenever the itemSelectBrowser's JFileChooser changed
// directories. This has not been reproduced or confirmed yet, but is
// pretty scary, and this function isn't too necessary.
//_sysTrayMenu.addItem(_itemSelectBrowser);
_sysTrayMenu.addItem(_itemOpenConsole);
refreshDisplay();
}

View File

@ -306,10 +306,10 @@
<copy file="build/sam.jar" todir="pkg-temp/lib/" />
<copy file="build/router.jar" todir="pkg-temp/lib/" />
<copy file="build/routerconsole.jar" todir="pkg-temp/lib/" />
<copy file="build/sucker.jar" todir="pkg-temp/lib" />
<!-- <copy file="build/sucker.jar" todir="pkg-temp/lib" /> -->
<copy file="build/i2psnark.jar" todir="pkg-temp/lib" />
<copy file="installer/resources/runplain.sh" todir="pkg-temp/" />
<!--<copy file="installer/resources/runplain.sh" todir="pkg-temp/" />-->
<copy file="build/i2ptunnel.war" todir="pkg-temp/webapps/" />
<copy file="build/routerconsole.war" todir="pkg-temp/webapps/" />
@ -318,25 +318,29 @@
<copy file="build/susidns.war" todir="pkg-temp/webapps/" />
<copy file="build/syndie.war" todir="pkg-temp/webapps/" />
<copy file="build/i2psnark.war" todir="pkg-temp/webapps/" />
<copy file="apps/i2psnark/java/build/launch-i2psnark.jar" todir="pkg-temp/" />
<!-- <copy file="apps/i2psnark/java/build/launch-i2psnark.jar" todir="pkg-temp/" /> -->
<copy file="apps/i2psnark/jetty-i2psnark.xml" todir="pkg-temp/" />
<copy file="history.txt" todir="pkg-temp/" />
<mkdir dir="pkg-temp/docs/" />
<copy file="news.xml" todir="pkg-temp/docs/" />
<!--
<copy file="installer/resources/dnf-header.ht" todir="pkg-temp/docs/" />
<copy file="installer/resources/dnfp-header.ht" todir="pkg-temp/docs/" />
<copy file="installer/resources/dnfb-header.ht" todir="pkg-temp/docs/" />
<copy file="installer/resources/dnfh-header.ht" todir="pkg-temp/docs/" />
<copy file="installer/resources/ahelper-conflict-header.ht" todir="pkg-temp/docs/" />
-->
<!-- polecat: please put your modified toolbar.html in installer/resources/toolbar.html
and uncomment the following -->
<!-- <copy file="installer/resources/toolbar.html" todir="pkg-temp/docs/" /> -->
<!--
<mkdir dir="pkg-temp/docs/themes/" />
<copy todir="pkg-temp/docs/themes/" >
<fileset dir="installer/resources/themes/" />
</copy>
-->
<!-- the addressbook handles this for updates -->
<!-- <copy file="hosts.txt" todir="pkg-temp/" /> -->

View File

@ -0,0 +1,198 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: BaseHashStandalone.java,v 1.1 2006/02/26 16:30:59 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
// This file is part of GNU Crypto.
//
// GNU Crypto is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// GNU Crypto is distributed in the hope that it will be useful, but
// WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; see the file COPYING. If not, write to the
//
// Free Software Foundation Inc.,
// 51 Franklin Street, Fifth Floor,
// Boston, MA 02110-1301
// USA
//
// Linking this library statically or dynamically with other modules is
// making a combined work based on this library. Thus, the terms and
// conditions of the GNU General Public License cover the whole
// combination.
//
// As a special exception, the copyright holders of this library give
// you permission to link this library with independent modules to
// produce an executable, regardless of the license terms of these
// independent modules, and to copy and distribute the resulting
// executable under terms of your choice, provided that you also meet,
// for each linked independent module, the terms and conditions of the
// license of that module. An independent module is a module which is
// not derived from or based on this library. If you modify this
// library, you may extend this exception to your version of the
// library, but you are not obligated to do so. If you do not wish to
// do so, delete this exception statement from your version.
// ----------------------------------------------------------------------------
/**
* <p>A base abstract class to facilitate hash implementations.</p>
*
* @version $Revision: 1.1 $
*/
public abstract class BaseHashStandalone implements IMessageDigestStandalone {
// Constants and variables
// -------------------------------------------------------------------------
/** The canonical name prefix of the hash. */
protected String name;
/** The hash (output) size in bytes. */
protected int hashSize;
/** The hash (inner) block size in bytes. */
protected int blockSize;
/** Number of bytes processed so far. */
protected long count;
/** Temporary input buffer. */
protected byte[] buffer;
// Constructor(s)
// -------------------------------------------------------------------------
/**
* <p>Trivial constructor for use by concrete subclasses.</p>
*
* @param name the canonical name prefix of this instance.
* @param hashSize the block size of the output in bytes.
* @param blockSize the block size of the internal transform.
*/
protected BaseHashStandalone(String name, int hashSize, int blockSize) {
super();
this.name = name;
this.hashSize = hashSize;
this.blockSize = blockSize;
this.buffer = new byte[blockSize];
resetContext();
}
// Class methods
// -------------------------------------------------------------------------
// Instance methods
// -------------------------------------------------------------------------
// IMessageDigestStandalone interface implementation ---------------------------------
public String name() {
return name;
}
public int hashSize() {
return hashSize;
}
public int blockSize() {
return blockSize;
}
public void update(byte b) {
// compute number of bytes still unhashed, i.e., present in buffer
int i = (int)(count % blockSize);
count++;
buffer[i] = b;
if (i == (blockSize - 1)) {
transform(buffer, 0);
}
}
public void update(byte[] b) {
update(b, 0, b.length);
}
public void update(byte[] b, int offset, int len) {
int n = (int)(count % blockSize);
count += len;
int partLen = blockSize - n;
int i = 0;
if (len >= partLen) {
System.arraycopy(b, offset, buffer, n, partLen);
transform(buffer, 0);
for (i = partLen; i + blockSize - 1 < len; i+= blockSize) {
transform(b, offset + i);
}
n = 0;
}
if (i < len) {
System.arraycopy(b, offset + i, buffer, n, len - i);
}
}
public byte[] digest() {
byte[] tail = padBuffer(); // pad remaining bytes in buffer
update(tail, 0, tail.length); // last transform of a message
byte[] result = getResult(); // make a result out of context
reset(); // reset this instance for future re-use
return result;
}
public void reset() { // reset this instance for future re-use
count = 0L;
for (int i = 0; i < blockSize; ) {
buffer[i++] = 0;
}
resetContext();
}
// methods to be implemented by concrete subclasses ------------------------
public abstract Object clone();
public abstract boolean selfTest();
/**
* <p>Returns the byte array to use as padding before completing a hash
* operation.</p>
*
* @return the bytes to pad the remaining bytes in the buffer before
* completing a hash operation.
*/
protected abstract byte[] padBuffer();
/**
* <p>Constructs the result from the contents of the current context.</p>
*
* @return the output of the completed hash operation.
*/
protected abstract byte[] getResult();
/** Resets the instance for future re-use. */
protected abstract void resetContext();
/**
* <p>The block digest transformation per se.</p>
*
* @param in the <i>blockSize</i> long block, as an array of bytes to digest.
* @param offset the index where the data to digest is located within the
* input buffer.
*/
protected abstract void transform(byte[] in, int offset);
}

View File

@ -0,0 +1,141 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: IMessageDigestStandalone.java,v 1.1 2006/02/26 16:30:59 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
// This file is part of GNU Crypto.
//
// GNU Crypto is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// GNU Crypto is distributed in the hope that it will be useful, but
// WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; see the file COPYING. If not, write to the
//
// Free Software Foundation Inc.,
// 51 Franklin Street, Fifth Floor,
// Boston, MA 02110-1301
// USA
//
// Linking this library statically or dynamically with other modules is
// making a combined work based on this library. Thus, the terms and
// conditions of the GNU General Public License cover the whole
// combination.
//
// As a special exception, the copyright holders of this library give
// you permission to link this library with independent modules to
// produce an executable, regardless of the license terms of these
// independent modules, and to copy and distribute the resulting
// executable under terms of your choice, provided that you also meet,
// for each linked independent module, the terms and conditions of the
// license of that module. An independent module is a module which is
// not derived from or based on this library. If you modify this
// library, you may extend this exception to your version of the
// library, but you are not obligated to do so. If you do not wish to
// do so, delete this exception statement from your version.
// ----------------------------------------------------------------------------
/**
* <p>The basic visible methods of any hash algorithm.</p>
*
* <p>A hash (or message digest) algorithm produces its output by iterating a
* basic compression function on blocks of data.</p>
*
* @version $Revision: 1.1 $
*/
public interface IMessageDigestStandalone extends Cloneable {
// Constants
// -------------------------------------------------------------------------
// Methods
// -------------------------------------------------------------------------
/**
* <p>Returns the canonical name of this algorithm.</p>
*
* @return the canonical name of this instance.
*/
String name();
/**
* <p>Returns the output length in bytes of this message digest algorithm.</p>
*
* @return the output length in bytes of this message digest algorithm.
*/
int hashSize();
/**
* <p>Returns the algorithm's (inner) block size in bytes.</p>
*
* @return the algorithm's inner block size in bytes.
*/
int blockSize();
/**
* <p>Continues a message digest operation using the input byte.</p>
*
* @param b the input byte to digest.
*/
void update(byte b);
/**
* <p>Continues a message digest operation, by filling the buffer, processing
* data in the algorithm's HASH_SIZE-bit block(s), updating the context and
* count, and buffering the remaining bytes in buffer for the next
* operation.</p>
*
* @param in the input block.
*/
void update(byte[] in);
/**
* <p>Continues a message digest operation, by filling the buffer, processing
* data in the algorithm's HASH_SIZE-bit block(s), updating the context and
* count, and buffering the remaining bytes in buffer for the next
* operation.</p>
*
* @param in the input block.
* @param offset start of meaningful bytes in input block.
* @param length number of bytes, in input block, to consider.
*/
void update(byte[] in, int offset, int length);
/**
* <p>Completes the message digest by performing final operations such as
* padding and resetting the instance.</p>
*
* @return the array of bytes representing the hash value.
*/
byte[] digest();
/**
* <p>Resets the current context of this instance clearing any eventually cached
* intermediary values.</p>
*/
void reset();
/**
* <p>A basic test. Ensures that the digest of a pre-determined message is equal
* to a known pre-computed value.</p>
*
* @return <tt>true</tt> if the implementation passes a basic self-test.
* Returns <tt>false</tt> otherwise.
*/
boolean selfTest();
/**
* <p>Returns a clone copy of this instance.</p>
*
* @return a clone copy of this instance.
*/
Object clone();
}
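Editor's note: this interface, together with the BaseHashStandalone base class before it, defines the streaming update/digest contract that Sha256Standalone, added next, implements. The DIGEST0 constant in that class is the well-known SHA-256 test vector for the string "abc". A minimal usage sketch, assuming the three files are compiled together (the hex formatting below is the editor's, not part of the change):

import gnu.crypto.hash.Sha256Standalone;
import java.io.UnsupportedEncodingException;

// Minimal usage sketch of the standalone digest classes added in this change set.
public class Sha256UsageSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        Sha256Standalone md = new Sha256Standalone();
        md.update("abc".getBytes("UTF-8")); // IMessageDigestStandalone.update(byte[])
        byte[] hash = md.digest();          // pads, finalizes, and resets the instance

        StringBuffer hex = new StringBuffer();
        for (int i = 0; i < hash.length; i++) {
            String h = Integer.toHexString(hash[i] & 0xFF).toUpperCase();
            if (h.length() == 1) hex.append('0');
            hex.append(h);
        }
        // Should match DIGEST0 in Sha256Standalone:
        // BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD
        System.out.println(hex.toString());
    }
}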

View File

@ -0,0 +1,276 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: Sha256Standalone.java,v 1.2 2006/03/16 16:45:19 jrandom Exp $
//
// Copyright (C) 2003 Free Software Foundation, Inc.
//
// This file is part of GNU Crypto.
//
// GNU Crypto is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// GNU Crypto is distributed in the hope that it will be useful, but
// WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; see the file COPYING. If not, write to the
//
// Free Software Foundation Inc.,
// 51 Franklin Street, Fifth Floor,
// Boston, MA 02110-1301
// USA
//
// Linking this library statically or dynamically with other modules is
// making a combined work based on this library. Thus, the terms and
// conditions of the GNU General Public License cover the whole
// combination.
//
// As a special exception, the copyright holders of this library give
// you permission to link this library with independent modules to
// produce an executable, regardless of the license terms of these
// independent modules, and to copy and distribute the resulting
// executable under terms of your choice, provided that you also meet,
// for each linked independent module, the terms and conditions of the
// license of that module. An independent module is a module which is
// not derived from or based on this library. If you modify this
// library, you may extend this exception to your version of the
// library, but you are not obligated to do so. If you do not wish to
// do so, delete this exception statement from your version.
// ----------------------------------------------------------------------------
//import gnu.crypto.util.Util;
/**
* <p>Implementation of SHA-2 (SHA-256) per the IETF Draft Specification.</p>
*
* <p>References:</p>
* <ol>
* <li><a href="http://ftp.ipv4.heanet.ie/pub/ietf/internet-drafts/draft-ietf-ipsec-ciph-aes-cbc-03.txt">
* Descriptions of SHA-256, SHA-384, and SHA-512</a>,</li>
* <li>http://csrc.nist.gov/cryptval/shs/sha256-384-512.pdf</li>
* </ol>
*
* Modified by jrandom@i2p.net to remove unnecessary gnu-crypto dependencies, and
* renamed from Sha256 to avoid conflicts with JVMs using gnu-crypto as their JCE
* provider.
*
* @version $Revision: 1.2 $
*/
public class Sha256Standalone extends BaseHashStandalone {
// Constants and variables
// -------------------------------------------------------------------------
private static final int[] k = {
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
};
private static final int BLOCK_SIZE = 64; // inner block size in bytes
private static final String DIGEST0 =
"BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD";
private static final int[] w = new int[64];
/** caches the result of the correctness test, once executed. */
private static Boolean valid;
/** 256-bit interim result. */
private int h0, h1, h2, h3, h4, h5, h6, h7;
// Constructor(s)
// -------------------------------------------------------------------------
/** Trivial 0-arguments constructor. */
public Sha256Standalone() {
super("sha256/standalone", 32, BLOCK_SIZE);
}
/**
* <p>Private constructor for cloning purposes.</p>
*
* @param md the instance to clone.
*/
private Sha256Standalone(Sha256Standalone md) {
this();
this.h0 = md.h0;
this.h1 = md.h1;
this.h2 = md.h2;
this.h3 = md.h3;
this.h4 = md.h4;
this.h5 = md.h5;
this.h6 = md.h6;
this.h7 = md.h7;
this.count = md.count;
this.buffer = (byte[]) md.buffer.clone();
}
// Class methods
// -------------------------------------------------------------------------
/*
public static final int[] G(int hh0, int hh1, int hh2, int hh3, int hh4,
int hh5, int hh6, int hh7, byte[] in, int offset) {
return sha(hh0, hh1, hh2, hh3, hh4, hh5, hh6, hh7, in, offset);
}
*/
// Instance methods
// -------------------------------------------------------------------------
// java.lang.Cloneable interface implementation ----------------------------
public Object clone() {
return new Sha256Standalone(this);
}
// Implementation of concrete methods in BaseHashStandalone --------------------------
private int transformResult[] = new int[8];
protected void transform(byte[] in, int offset) {
//int[] result = sha(h0, h1, h2, h3, h4, h5, h6, h7, in, offset);
sha(h0, h1, h2, h3, h4, h5, h6, h7, in, offset, transformResult);
h0 = transformResult[0];
h1 = transformResult[1];
h2 = transformResult[2];
h3 = transformResult[3];
h4 = transformResult[4];
h5 = transformResult[5];
h6 = transformResult[6];
h7 = transformResult[7];
}
protected byte[] padBuffer() {
int n = (int) (count % BLOCK_SIZE);
int padding = (n < 56) ? (56 - n) : (120 - n);
byte[] result = new byte[padding + 8];
// padding is always binary 1 followed by binary 0s
result[0] = (byte) 0x80;
// save number of bits, casting the long to an array of 8 bytes
long bits = count << 3;
result[padding++] = (byte)(bits >>> 56);
result[padding++] = (byte)(bits >>> 48);
result[padding++] = (byte)(bits >>> 40);
result[padding++] = (byte)(bits >>> 32);
result[padding++] = (byte)(bits >>> 24);
result[padding++] = (byte)(bits >>> 16);
result[padding++] = (byte)(bits >>> 8);
result[padding ] = (byte) bits;
return result;
}
protected byte[] getResult() {
return new byte[] {
(byte)(h0 >>> 24), (byte)(h0 >>> 16), (byte)(h0 >>> 8), (byte) h0,
(byte)(h1 >>> 24), (byte)(h1 >>> 16), (byte)(h1 >>> 8), (byte) h1,
(byte)(h2 >>> 24), (byte)(h2 >>> 16), (byte)(h2 >>> 8), (byte) h2,
(byte)(h3 >>> 24), (byte)(h3 >>> 16), (byte)(h3 >>> 8), (byte) h3,
(byte)(h4 >>> 24), (byte)(h4 >>> 16), (byte)(h4 >>> 8), (byte) h4,
(byte)(h5 >>> 24), (byte)(h5 >>> 16), (byte)(h5 >>> 8), (byte) h5,
(byte)(h6 >>> 24), (byte)(h6 >>> 16), (byte)(h6 >>> 8), (byte) h6,
(byte)(h7 >>> 24), (byte)(h7 >>> 16), (byte)(h7 >>> 8), (byte) h7
};
}
protected void resetContext() {
// magic SHA-256 initialisation constants
h0 = 0x6a09e667;
h1 = 0xbb67ae85;
h2 = 0x3c6ef372;
h3 = 0xa54ff53a;
h4 = 0x510e527f;
h5 = 0x9b05688c;
h6 = 0x1f83d9ab;
h7 = 0x5be0cd19;
}
public boolean selfTest() {
if (valid == null) {
Sha256Standalone md = new Sha256Standalone();
md.update((byte) 0x61); // a
md.update((byte) 0x62); // b
md.update((byte) 0x63); // c
String result = "broken"; //Util.toString(md.digest());
valid = new Boolean(DIGEST0.equals(result));
}
return valid.booleanValue();
}
// SHA specific methods ----------------------------------------------------
private static final synchronized void
sha(int hh0, int hh1, int hh2, int hh3, int hh4, int hh5, int hh6, int hh7, byte[] in, int offset, int out[]) {
int A = hh0;
int B = hh1;
int C = hh2;
int D = hh3;
int E = hh4;
int F = hh5;
int G = hh6;
int H = hh7;
int r, T, T2;
for (r = 0; r < 16; r++) {
w[r] = in[offset++] << 24 |
(in[offset++] & 0xFF) << 16 |
(in[offset++] & 0xFF) << 8 |
(in[offset++] & 0xFF);
}
for (r = 16; r < 64; r++) {
T = w[r - 2];
T2 = w[r - 15];
w[r] = (((T >>> 17) | (T << 15)) ^ ((T >>> 19) | (T << 13)) ^ (T >>> 10)) + w[r - 7] + (((T2 >>> 7) | (T2 << 25)) ^ ((T2 >>> 18) | (T2 << 14)) ^ (T2 >>> 3)) + w[r - 16];
}
for (r = 0; r < 64; r++) {
T = H + (((E >>> 6) | (E << 26)) ^ ((E >>> 11) | (E << 21)) ^ ((E >>> 25) | (E << 7))) + ((E & F) ^ (~E & G)) + k[r] + w[r];
T2 = (((A >>> 2) | (A << 30)) ^ ((A >>> 13) | (A << 19)) ^ ((A >>> 22) | (A << 10))) + ((A & B) ^ (A & C) ^ (B & C));
H = G;
G = F;
F = E;
E = D + T;
D = C;
C = B;
B = A;
A = T + T2;
}
/*
return new int[] {
hh0 + A, hh1 + B, hh2 + C, hh3 + D, hh4 + E, hh5 + F, hh6 + G, hh7 + H
};
*/
out[0] = hh0 + A;
out[1] = hh1 + B;
out[2] = hh2 + C;
out[3] = hh3 + D;
out[4] = hh4 + E;
out[5] = hh5 + F;
out[6] = hh6 + G;
out[7] = hh7 + H;
}
}
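
A minimal sketch of the comparison selfTest() is meant to make against DIGEST0 (the known SHA-256 of "abc"); the toHex helper is an assumption standing in for the stripped gnu.crypto.util.Util and is not part of the class.

import gnu.crypto.hash.Sha256Standalone;

public class Sha256CheckSketch {
    // known SHA-256 of "abc", same value as DIGEST0 above
    private static final String DIGEST0 =
        "BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD";

    // hypothetical replacement for Util.toString(byte[])
    static String toHex(byte[] b) {
        StringBuffer buf = new StringBuffer(b.length * 2);
        for (int i = 0; i < b.length; i++) {
            String h = Integer.toHexString(b[i] & 0xFF).toUpperCase();
            if (h.length() == 1) buf.append('0');
            buf.append(h);
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        Sha256Standalone md = new Sha256Standalone();
        md.update((byte) 'a');
        md.update((byte) 'b');
        md.update((byte) 'c');
        System.out.println("digest matches? " + DIGEST0.equals(toHex(md.digest())));
    }
}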

View File

@ -0,0 +1,175 @@
package gnu.crypto.prng;
import java.util.*;
/**
* Fortuna instance that tries to avoid blocking whenever possible by rotating
* through several pre-filled buffer segments, rather than a single buffer that
* would block once its data has been consumed.
*/
public class AsyncFortunaStandalone extends FortunaStandalone implements Runnable {
private static final int BUFFERS = 16;
private static final int BUFSIZE = 256*1024;
private final byte asyncBuffers[][] = new byte[BUFFERS][BUFSIZE];
private final int status[] = new int[BUFFERS];
private int nextBuf = 0;
private static final int STATUS_NEED_FILL = 0;
private static final int STATUS_FILLING = 1;
private static final int STATUS_FILLED = 2;
private static final int STATUS_LIVE = 3;
public AsyncFortunaStandalone() {
super();
for (int i = 0; i < BUFFERS; i++)
status[i] = STATUS_NEED_FILL;
}
public void startup() {
Thread refillThread = new Thread(this, "PRNG");
refillThread.setDaemon(true);
refillThread.setPriority(Thread.MIN_PRIORITY+1);
refillThread.start();
}
/** the seed is only propagated once the PRNG is started with startup() */
public void seed(byte val[]) {
Map props = new HashMap(1);
props.put(SEED, (Object)val);
init(props);
//fillBlock();
}
protected void allocBuffer() {}
/**
* make the next available filled buffer current, scheduling any unfilled
* buffers for refill, and blocking until at least one buffer is ready
*/
protected void rotateBuffer() {
synchronized (asyncBuffers) {
// wait until we get some filled
long before = System.currentTimeMillis();
long waited = 0;
while (status[nextBuf] != STATUS_FILLED) {
System.out.println(Thread.currentThread().getName() + ": Next PRNG buffer "
+ nextBuf + " isn't ready (" + status[nextBuf] + ")");
//new Exception("source").printStackTrace();
asyncBuffers.notifyAll();
try {
asyncBuffers.wait();
} catch (InterruptedException ie) {}
waited = System.currentTimeMillis()-before;
}
if (waited > 0)
System.out.println(Thread.currentThread().getName() + ": Took " + waited
+ "ms for a full PRNG buffer to be found");
//System.out.println(Thread.currentThread().getName() + ": Switching to prng buffer " + nextBuf);
buffer = asyncBuffers[nextBuf];
status[nextBuf] = STATUS_LIVE;
int prev=nextBuf-1;
if (prev<0)
prev = BUFFERS-1;
if (status[prev] == STATUS_LIVE)
status[prev] = STATUS_NEED_FILL;
nextBuf++;
if (nextBuf >= BUFFERS)
nextBuf = 0;
asyncBuffers.notify();
}
}
public void run() {
while (true) {
int toFill = -1;
try {
synchronized (asyncBuffers) {
for (int i = 0; i < BUFFERS; i++) {
if (status[i] == STATUS_NEED_FILL) {
status[i] = STATUS_FILLING;
toFill = i;
break;
}
}
if (toFill == -1) {
//System.out.println(Thread.currentThread().getName() + ": All pending buffers full");
asyncBuffers.wait();
}
}
} catch (InterruptedException ie) {}
if (toFill != -1) {
//System.out.println(Thread.currentThread().getName() + ": Filling prng buffer " + toFill);
long before = System.currentTimeMillis();
doFill(asyncBuffers[toFill]);
long after = System.currentTimeMillis();
synchronized (asyncBuffers) {
status[toFill] = STATUS_FILLED;
//System.out.println(Thread.currentThread().getName() + ": Prng buffer " + toFill + " filled after " + (after-before));
asyncBuffers.notifyAll();
}
Thread.yield();
long waitTime = (after-before)*5;
if (waitTime <= 0) // somehow postman saw waitTime show up as negative
waitTime = 50;
try { Thread.sleep(waitTime); } catch (InterruptedException ie) {}
}
}
}
public void fillBlock()
{
rotateBuffer();
}
private void doFill(byte buf[]) {
long start = System.currentTimeMillis();
if (pool0Count >= MIN_POOL_SIZE
&& System.currentTimeMillis() - lastReseed > 100)
{
reseedCount++;
//byte[] seed = new byte[0];
for (int i = 0; i < NUM_POOLS; i++)
{
if (reseedCount % (1 << i) == 0) {
generator.addRandomBytes(pools[i].digest());
}
}
lastReseed = System.currentTimeMillis();
}
generator.nextBytes(buf);
long now = System.currentTimeMillis();
long diff = now-lastRefill;
lastRefill = now;
long refillTime = now-start;
//System.out.println("Refilling " + (++refillCount) + " after " + diff + " for the PRNG took " + refillTime);
}
public static void main(String args[]) {
try {
AsyncFortunaStandalone rand = new AsyncFortunaStandalone();
byte seed[] = new byte[1024];
rand.seed(seed);
System.out.println("Before starting prng");
rand.startup();
System.out.println("Starting prng, waiting 1 minute");
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
System.out.println("PRNG started, beginning test");
long before = System.currentTimeMillis();
byte buf[] = new byte[1024];
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.util.zip.GZIPOutputStream gos = new java.util.zip.GZIPOutputStream(baos);
for (int i = 0; i < 1024; i++) {
rand.nextBytes(buf);
gos.write(buf);
}
long after = System.currentTimeMillis();
gos.finish();
byte compressed[] = baos.toByteArray();
System.out.println("Compressed size of 1MB: " + compressed.length + " took " + (after-before));
} catch (Exception e) { e.printStackTrace(); }
try { Thread.sleep(5*60*1000); } catch (InterruptedException ie) {}
}
}
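
The four STATUS_* values above form a simple per-buffer life cycle; the sketch below is an interpretation of rotateBuffer() and run(), not additional API, and the printed transitions are only illustrative.

// Illustrative only: the life cycle each of the 16 buffers moves through.
public class FortunaBufferCycleSketch {
    public static void main(String[] args) {
        // Same numbering as the STATUS_* constants in AsyncFortunaStandalone.
        String[] status = { "NEED_FILL", "FILLING", "FILLED", "LIVE" };
        // NEED_FILL -> FILLING (run() claims it), FILLING -> FILLED (doFill done),
        // FILLED -> LIVE (rotateBuffer() hands it out), LIVE -> NEED_FILL
        // (once the next buffer goes live, the previous one is recycled).
        int[] cycle = { 0, 1, 2, 3, 0 };
        for (int i = 0; i < cycle.length - 1; i++)
            System.out.println(status[cycle[i]] + " -> " + status[cycle[i + 1]]);
    }
}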

View File

@ -47,9 +47,9 @@ import java.util.Map;
* <p>An abstract class to facilitate implementing PRNG algorithms.</p>
*
* Modified slightly by jrandom for I2P (removing unneeded exceptions)
* @version $Revision: 1.12 $
* @version $Revision: 1.1 $
*/
public abstract class BasePRNG implements IRandom {
public abstract class BasePRNGStandalone implements IRandomStandalone {
// Constants and variables
// -------------------------------------------------------------------------
@ -74,7 +74,7 @@ public abstract class BasePRNG implements IRandom {
*
* @param name the canonical name of this instance.
*/
protected BasePRNG(String name) {
protected BasePRNGStandalone(String name) {
super();
this.name = name;
@ -88,7 +88,7 @@ public abstract class BasePRNG implements IRandom {
// Instance methods
// -------------------------------------------------------------------------
// IRandom interface implementation ----------------------------------------
// IRandomStandalone interface implementation ----------------------------------------
public String name() {
return name;

View File

@ -54,7 +54,7 @@ import java.util.Iterator;
import java.util.Map;
import java.util.HashMap;
import org.bouncycastle.crypto.digests.SHA256Digest;
import gnu.crypto.hash.Sha256Standalone;
import net.i2p.crypto.CryptixRijndael_Algorithm;
import net.i2p.crypto.CryptixAESKeyCache;
@ -91,38 +91,44 @@ import net.i2p.crypto.CryptixAESKeyCache;
* Bruce Schneier). ISBN 0-471-22357-3.</li>
* </ul>
*
* Modified by jrandom for I2P to use Bouncycastle's SHA256, Cryptix's AES,
* Modified by jrandom for I2P to use a standalone gnu-crypto SHA256, Cryptix's AES,
* to strip out some unnecessary dependencies and increase the buffer size.
* Renamed from Fortuna to FortunaStandalone so it doesn't conflict with the
* gnu-crypto implementation, which has been imported into GNU/classpath
*
*/
public class Fortuna extends BasePRNG
implements Serializable, RandomEventListener
public class FortunaStandalone extends BasePRNGStandalone implements Serializable, RandomEventListenerStandalone
{
private static final long serialVersionUID = 0xFACADE;
private static final int SEED_FILE_SIZE = 64;
private static final int NUM_POOLS = 32;
private static final int MIN_POOL_SIZE = 64;
private final Generator generator;
private final SHA256Digest[] pools;
private long lastReseed;
private int pool;
private int pool0Count;
private int reseedCount;
static final int NUM_POOLS = 32;
static final int MIN_POOL_SIZE = 64;
final Generator generator;
final Sha256Standalone[] pools;
long lastReseed;
int pool;
int pool0Count;
int reseedCount;
static long refillCount = 0;
static long lastRefill = System.currentTimeMillis();
public static final String SEED = "gnu.crypto.prng.fortuna.seed";
public Fortuna()
public FortunaStandalone()
{
super("Fortuna i2p");
generator = new Generator();
pools = new SHA256Digest[NUM_POOLS];
pools = new Sha256Standalone[NUM_POOLS];
for (int i = 0; i < NUM_POOLS; i++)
pools[i] = new SHA256Digest();
pools[i] = new Sha256Standalone();
lastReseed = 0;
pool = 0;
pool0Count = 0;
allocBuffer();
}
protected void allocBuffer() {
buffer = new byte[4*1024*1024]; //256]; // larger buffer to reduce churn
}
@ -142,10 +148,9 @@ public class Fortuna extends BasePRNG
generator.init(attributes);
}
/** fillBlock is not thread safe, so will be locked anyway */
private byte fillBlockBuf[] = new byte[32];
public void fillBlock()
{
long start = System.currentTimeMillis();
if (pool0Count >= MIN_POOL_SIZE
&& System.currentTimeMillis() - lastReseed > 100)
{
@ -154,14 +159,17 @@ public class Fortuna extends BasePRNG
for (int i = 0; i < NUM_POOLS; i++)
{
if (reseedCount % (1 << i) == 0) {
byte buf[] = fillBlockBuf;//new byte[32];
pools[i].doFinal(buf, 0);
generator.addRandomBytes(buf);//pools[i].digest());
generator.addRandomBytes(pools[i].digest());
}
}
lastReseed = System.currentTimeMillis();
}
generator.nextBytes(buffer);
long now = System.currentTimeMillis();
long diff = now-lastRefill;
lastRefill = now;
long refillTime = now-start;
System.out.println("Refilling " + (++refillCount) + " after " + diff + " for the PRNG took " + refillTime);
}
public void addRandomByte(byte b)
@ -180,7 +188,7 @@ public class Fortuna extends BasePRNG
pool = (pool + 1) % NUM_POOLS;
}
public void addRandomEvent(RandomEvent event)
public void addRandomEvent(RandomEventStandalone event)
{
if (event.getPoolNumber() < 0 || event.getPoolNumber() >= pools.length)
throw new IllegalArgumentException("pool number out of range: "
@ -215,12 +223,12 @@ public class Fortuna extends BasePRNG
* right; Fortuna itself is basically a wrapper around this generator
* that manages reseeding in a secure way.
*/
public static class Generator extends BasePRNG implements Cloneable
public static class Generator extends BasePRNGStandalone implements Cloneable
{
private static final int LIMIT = 1 << 20;
private final SHA256Digest hash;
private final Sha256Standalone hash;
private final byte[] counter;
private final byte[] key;
/** current encryption key built from the keying material */
@ -231,7 +239,7 @@ public class Fortuna extends BasePRNG
public Generator ()
{
super("Fortuna.generator.i2p");
this.hash = new SHA256Digest();
this.hash = new Sha256Standalone();
counter = new byte[16]; //cipher.defaultBlockSize()];
buffer = new byte[16]; //cipher.defaultBlockSize()];
int keysize = 32;
@ -284,9 +292,9 @@ public class Fortuna extends BasePRNG
{
hash.update(key, 0, key.length);
hash.update(seed, offset, length);
//byte[] newkey = hash.digest();
//System.arraycopy(newkey, 0, key, 0, Math.min(key.length, newkey.length));
hash.doFinal(key, 0);
byte[] newkey = hash.digest();
System.arraycopy(newkey, 0, key, 0, Math.min(key.length, newkey.length));
//hash.doFinal(key, 0);
resetKey();
incrementCounter();
seeded = true;
@ -341,7 +349,35 @@ public class Fortuna extends BasePRNG
}
public static void main(String args[]) {
Fortuna f = new Fortuna();
byte in[] = new byte[16];
byte out[] = new byte[16];
byte key[] = new byte[32];
try {
CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
long beforeAll = System.currentTimeMillis();
for (int i = 0; i < 256; i++) {
//long before =System.currentTimeMillis();
for (int j = 0; j < 1024; j++)
CryptixRijndael_Algorithm.blockEncrypt(in, out, 0, 0, cryptixKey);
//long after = System.currentTimeMillis();
//System.out.println("encrypting 16KB took " + (after-before));
}
long after = System.currentTimeMillis();
System.out.println("encrypting 4MB took " + (after-beforeAll));
} catch (Exception e) { e.printStackTrace(); }
try {
CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
byte data[] = new byte[4*1024*1024];
long beforeAll = System.currentTimeMillis();
//CryptixRijndael_Algorithm.ecbBulkEncrypt(data, data, cryptixKey);
long after = System.currentTimeMillis();
System.out.println("encrypting 4MB took " + (after-beforeAll));
} catch (Exception e) { e.printStackTrace(); }
/*
FortunaStandalone f = new FortunaStandalone();
java.util.HashMap props = new java.util.HashMap();
byte initSeed[] = new byte[1234];
new java.util.Random().nextBytes(initSeed);
@ -354,5 +390,6 @@ public class Fortuna extends BasePRNG
}
long time = System.currentTimeMillis() - before;
System.out.println("512MB took " + time + ", or " + (8*64d)/((double)time/1000d) +"MBps");
*/
}
}
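
The reseed loop above pulls pool i in only when reseedCount is divisible by 2^i, so pool 0 feeds every reseed while higher pools contribute exponentially less often; a small sketch of that schedule (the printed output is illustrative, not router output):

// Illustrative only: mirrors the "reseedCount % (1 << i) == 0" rule above.
public class FortunaPoolScheduleSketch {
    public static void main(String[] args) {
        int numPools = 32; // same as NUM_POOLS
        for (int reseedCount = 1; reseedCount <= 8; reseedCount++) {
            StringBuffer used = new StringBuffer();
            for (int i = 0; i < numPools; i++)
                if (reseedCount % (1 << i) == 0)
                    used.append(i).append(' ');
            System.out.println("reseed " + reseedCount + " uses pools: " + used);
        }
    }
}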

View File

@ -1,7 +1,7 @@
package gnu.crypto.prng;
// ----------------------------------------------------------------------------
// $Id: IRandom.java,v 1.12 2005/10/06 04:24:17 rsdio Exp $
// $Id: IRandomStandalone.java,v 1.1 2005/10/22 13:10:00 jrandom Exp $
//
// Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc.
//
@ -82,9 +82,9 @@ import java.util.Map;
* Menezes, A., van Oorschot, P. and S. Vanstone.</li>
* </ol>
*
* @version $Revision: 1.12 $
* @version $Revision: 1.1 $
*/
public interface IRandom extends Cloneable {
public interface IRandomStandalone extends Cloneable {
// Constants
// -------------------------------------------------------------------------
@ -110,32 +110,32 @@ public interface IRandom extends Cloneable {
void init(Map attributes);
/**
* <p>Returns the next 8 bits of random data generated from this instance.</p>
*
* @return the next 8 bits of random data generated from this instance.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedException if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
byte nextByte() throws IllegalStateException, LimitReachedException;
* <p>Returns the next 8 bits of random data generated from this instance.</p>
*
* @return the next 8 bits of random data generated from this instance.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedExceptionStandalone if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
byte nextByte() throws IllegalStateException, LimitReachedExceptionStandalone;
/**
* <p>Fills the designated byte array, starting from byte at index
* <code>offset</code>, for a maximum of <code>length</code> bytes with the
* output of this generator instance.
*
* @param out the placeholder to contain the generated random bytes.
* @param offset the starting index in <i>out</i> to consider. This method
* does nothing if this parameter is not within <code>0</code> and
* <code>out.length</code>.
* @param length the maximum number of required random bytes. This method
* does nothing if this parameter is less than <code>1</code>.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedException if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
* <p>Fills the designated byte array, starting from byte at index
* <code>offset</code>, for a maximum of <code>length</code> bytes with the
* output of this generator instance.
*
* @param out the placeholder to contain the generated random bytes.
* @param offset the starting index in <i>out</i> to consider. This method
* does nothing if this parameter is not within <code>0</code> and
* <code>out.length</code>.
* @param length the maximum number of required random bytes. This method
* does nothing if this parameter is less than <code>1</code>.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedExceptionStandalone if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
void nextBytes(byte[] out, int offset, int length)
throws IllegalStateException, LimitReachedException;
throws IllegalStateException, LimitReachedExceptionStandalone;
/**
* <p>Supplement, or possibly replace, the random state of this PRNG with

View File

@ -1,7 +1,7 @@
package gnu.crypto.prng;
// ----------------------------------------------------------------------------
// $Id: LimitReachedException.java,v 1.5 2005/10/06 04:24:17 rsdio Exp $
// $Id: LimitReachedExceptionStandalone.java,v 1.1 2005/10/22 13:10:00 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
@ -47,9 +47,9 @@ package gnu.crypto.prng;
* A checked exception that indicates that a pseudo random number generator has
* reached its theoretical limit in generating random bytes.
*
* @version $Revision: 1.5 $
* @version $Revision: 1.1 $
*/
public class LimitReachedException extends Exception {
public class LimitReachedExceptionStandalone extends Exception {
// Constants and variables
// -------------------------------------------------------------------------
@ -57,11 +57,11 @@ public class LimitReachedException extends Exception {
// Constructor(s)
// -------------------------------------------------------------------------
public LimitReachedException() {
public LimitReachedExceptionStandalone() {
super();
}
public LimitReachedException(String msg) {
public LimitReachedExceptionStandalone(String msg) {
super(msg);
}

View File

@ -1,4 +1,4 @@
/* RandomEventListener.java -- event listener
/* RandomEventListenerStandalone.java -- event listener
Copyright (C) 2004 Free Software Foundation, Inc.
This file is part of GNU Crypto.
@ -47,7 +47,7 @@ import java.util.EventListener;
* An interface for entropy accumulators that will be notified of random
* events.
*/
public interface RandomEventListener extends EventListener
public interface RandomEventListenerStandalone extends EventListener
{
void addRandomEvent(RandomEvent event);
void addRandomEvent(RandomEventStandalone event);
}

View File

@ -1,4 +1,4 @@
/* RandomEvent.java -- a random event.
/* RandomEventStandalone.java -- a random event.
Copyright (C) 2004 Free Software Foundation, Inc.
This file is part of GNU Crypto.
@ -47,14 +47,14 @@ import java.util.EventObject;
* An interface for entropy accumulators that will be notified of random
* events.
*/
public class RandomEvent extends EventObject
public class RandomEventStandalone extends EventObject
{
private final byte sourceNumber;
private final byte poolNumber;
private final byte[] data;
public RandomEvent(Object source, byte sourceNumber, byte poolNumber,
public RandomEventStandalone(Object source, byte sourceNumber, byte poolNumber,
byte[] data)
{
super(source);

View File

@ -14,8 +14,8 @@ package net.i2p;
*
*/
public class CoreVersion {
public final static String ID = "$Revision: 1.52 $ $Date: 2006/01/12 16:18:38 $";
public final static String VERSION = "0.6.1.10";
public final static String ID = "$Revision: 1.66 $ $Date: 2006-07-27 22:35:02 $";
public final static String VERSION = "0.6.1.24";
public static void main(String args[]) {
System.out.println("I2P Core version: " + VERSION);

View File

@ -11,11 +11,10 @@ import net.i2p.crypto.CryptixAESEngine;
import net.i2p.crypto.DSAEngine;
import net.i2p.crypto.DummyDSAEngine;
import net.i2p.crypto.DummyElGamalEngine;
import net.i2p.crypto.DummyHMACSHA256Generator;
import net.i2p.crypto.DummyPooledRandomSource;
import net.i2p.crypto.ElGamalAESEngine;
import net.i2p.crypto.ElGamalEngine;
import net.i2p.crypto.HMACSHA256Generator;
import net.i2p.crypto.HMACGenerator;
import net.i2p.crypto.KeyGenerator;
import net.i2p.crypto.PersistentSessionKeyManager;
import net.i2p.crypto.SHA256Generator;
@ -67,7 +66,7 @@ public class I2PAppContext {
private ElGamalAESEngine _elGamalAESEngine;
private AESEngine _AESEngine;
private LogManager _logManager;
private HMACSHA256Generator _hmac;
private HMACGenerator _hmac;
private SHA256Generator _sha;
private Clock _clock;
private DSAEngine _dsa;
@ -342,17 +341,14 @@ public class I2PAppContext {
* other than for consistency, and perhaps later we'll want to
* include some stats.
*/
public HMACSHA256Generator hmac() {
public HMACGenerator hmac() {
if (!_hmacInitialized) initializeHMAC();
return _hmac;
}
private void initializeHMAC() {
synchronized (this) {
if (_hmac == null) {
if ("true".equals(getProperty("i2p.fakeHMAC", "false")))
_hmac = new DummyHMACSHA256Generator(this);
else
_hmac= new HMACSHA256Generator(this);
_hmac= new HMACGenerator(this);
}
_hmacInitialized = true;
}

View File

@ -140,6 +140,7 @@ abstract class I2PSessionImpl implements I2PSession, I2CPMessageReader.I2CPMessa
loadConfig(options);
_sessionId = null;
_leaseSet = null;
_context.statManager().createRateStat("client.availableMessages", "How many messages are available for the current client", "ClientMessages", new long[] { 60*1000, 10*60*1000 });
}
/**
@ -299,11 +300,17 @@ abstract class I2PSessionImpl implements I2PSession, I2CPMessageReader.I2CPMessa
*
*/
public byte[] receiveMessage(int msgId) throws I2PSessionException {
int remaining = 0;
MessagePayloadMessage msg = null;
synchronized (_availableMessages) {
msg = (MessagePayloadMessage) _availableMessages.remove(new Long(msgId));
remaining = _availableMessages.size();
}
_context.statManager().addRateData("client.availableMessages", remaining, 0);
if (msg == null) {
_log.error("Receive message " + msgId + " had no matches, remaining=" + remaining);
return null;
}
if (msg == null) return null;
return msg.getPayload().getUnencryptedData();
}
@ -339,9 +346,13 @@ abstract class I2PSessionImpl implements I2PSession, I2CPMessageReader.I2CPMessa
* Receive a payload message and let the app know it's available
*/
public void addNewMessage(MessagePayloadMessage msg) {
Long mid = new Long(msg.getMessageId());
int avail = 0;
synchronized (_availableMessages) {
_availableMessages.put(new Long(msg.getMessageId()), msg);
_availableMessages.put(mid, msg);
avail = _availableMessages.size();
}
_context.statManager().addRateData("client.availableMessages", avail, 0);
long id = msg.getMessageId();
byte data[] = msg.getPayload().getUnencryptedData();
if ((data == null) || (data.length <= 0)) {
@ -354,20 +365,23 @@ abstract class I2PSessionImpl implements I2PSession, I2CPMessageReader.I2CPMessa
if (_log.shouldLog(Log.INFO))
_log.info(getPrefix() + "Notified availability for session " + _sessionId + ", message " + id);
}
SimpleTimer.getInstance().addEvent(new VerifyUsage(id), 30*1000);
SimpleTimer.getInstance().addEvent(new VerifyUsage(mid), 30*1000);
}
private class VerifyUsage implements SimpleTimer.TimedEvent {
private long _msgId;
public VerifyUsage(long id) { _msgId = id; }
private Long _msgId;
public VerifyUsage(Long id) { _msgId = id; }
public void timeReached() {
MessagePayloadMessage removed = null;
int remaining = 0;
synchronized (_availableMessages) {
removed = (MessagePayloadMessage)_availableMessages.remove(new Long(_msgId));
removed = (MessagePayloadMessage)_availableMessages.remove(_msgId);
remaining = _availableMessages.size();
}
if (removed != null) {
_log.log(Log.CRIT, "Message NOT removed! id=" + _msgId + ": " + removed + ": remaining: " + remaining);
_context.statManager().addRateData("client.availableMessages", remaining, 0);
}
if (removed != null)
_log.log(Log.CRIT, "Message NOT removed! id=" + _msgId + ": " + removed);
}
}
private class AvailabilityNotifier implements Runnable {

View File

@ -153,7 +153,6 @@ public class AESEngine {
/** decrypt the data with the session key provided
* @param payload encrypted data
* @param sessionKey private session key
* @return unencrypted data
*/
public void decryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte rv[], int outIndex) {
System.arraycopy(payload, inIndex, rv, outIndex, rv.length - outIndex);

View File

@ -139,7 +139,6 @@ public class CryptixAESEngine extends AESEngine {
/** decrypt the data with the session key provided
* @param payload encrypted data
* @param sessionKey private session key
* @return unencrypted data
*/
public final void decryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte rv[], int outIndex) {
if ( (payload == null) || (rv == null) )

View File

@ -320,9 +320,9 @@ public class DHSessionKeyBuilder {
if (_myPrivateValue == null) generateMyValue();
_sessionKey = calculateSessionKey(_myPrivateValue, _peerValue);
} else {
System.err.println("Not ready yet.. privateValue and peerValue must be set ("
+ (_myPrivateValue != null ? "set" : "null") + ","
+ (_peerValue != null ? "set" : "null") + ")");
//System.err.println("Not ready yet.. privateValue and peerValue must be set ("
// + (_myPrivateValue != null ? "set" : "null") + ","
// + (_peerValue != null ? "set" : "null") + ")");
}
return _sessionKey;
}

View File

@ -1,52 +0,0 @@
package net.i2p.crypto;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.data.Hash;
import net.i2p.data.SessionKey;
/**
* Calculate the HMAC-SHA256 of a key+message. All the good stuff occurs
* in {@link org.bouncycastle.crypto.macs.HMac} and
* {@link org.bouncycastle.crypto.digests.SHA256Digest}.
*
*/
public class DummyHMACSHA256Generator extends HMACSHA256Generator {
private I2PAppContext _context;
public DummyHMACSHA256Generator(I2PAppContext context) {
super(context);
_context = context;
}
public static HMACSHA256Generator getInstance() {
return I2PAppContext.getGlobalContext().hmac();
}
/**
* Calculate the HMAC of the data with the given key
*/
public Hash calculate(SessionKey key, byte data[]) {
if ((key == null) || (key.getData() == null) || (data == null))
throw new NullPointerException("Null arguments for HMAC");
return calculate(key, data, 0, data.length);
}
/**
* Calculate the HMAC of the data with the given key
*/
public Hash calculate(SessionKey key, byte data[], int offset, int length) {
if ((key == null) || (key.getData() == null) || (data == null))
throw new NullPointerException("Null arguments for HMAC");
byte rv[] = new byte[Hash.HASH_LENGTH];
System.arraycopy(key.getData(), 0, rv, 0, Hash.HASH_LENGTH);
if (Hash.HASH_LENGTH >= length)
DataHelper.xor(data, offset, rv, 0, rv, 0, length);
else
DataHelper.xor(data, offset, rv, 0, rv, 0, Hash.HASH_LENGTH);
return new Hash(rv);
}
}

View File

@ -283,11 +283,12 @@ public class ElGamalAESEngine {
try {
SessionKey newKey = null;
Hash readHash = null;
List tags = new ArrayList();
List tags = null;
//ByteArrayInputStream bais = new ByteArrayInputStream(decrypted);
int cur = 0;
long numTags = DataHelper.fromLong(decrypted, cur, 2);
if (numTags > 0) tags = new ArrayList((int)numTags);
cur += 2;
//_log.debug("# tags: " + numTags);
if ((numTags < 0) || (numTags > 200)) throw new Exception("Invalid number of session tags");
@ -326,7 +327,8 @@ public class ElGamalAESEngine {
if (eq) {
// everything matches. w00t.
foundTags.addAll(tags);
if (tags != null)
foundTags.addAll(tags);
if (newKey != null) foundKey.setData(newKey.getData());
return unencrData;
}
@ -610,4 +612,4 @@ public class ElGamalAESEngine {
}
}
}
}
}

View File

@ -129,6 +129,7 @@ public class ElGamalEngine {
(ybytes.length > 257 ? 257 : ybytes.length));
System.arraycopy(dbytes, 0, out, (dbytes.length < 257 ? 514 - dbytes.length : 257),
(dbytes.length > 257 ? 257 : dbytes.length));
/*
StringBuffer buf = new StringBuffer(1024);
buf.append("Timing\n");
buf.append("0-1: ").append(t1 - t0).append('\n');
@ -142,6 +143,7 @@ public class ElGamalEngine {
buf.append("8-9: ").append(t9 - t8).append('\n');
buf.append("9-10: ").append(t10 - t9).append('\n');
//_log.debug(buf.toString());
*/
long end = _context.clock().now();
long diff = end - start;

View File

@ -8,46 +8,26 @@ import net.i2p.data.DataHelper;
import net.i2p.data.Hash;
import net.i2p.data.SessionKey;
import org.bouncycastle.crypto.digests.SHA256Digest;
import org.bouncycastle.crypto.digests.MD5Digest;
import org.bouncycastle.crypto.macs.HMac;
/**
* Calculate the HMAC-SHA256 of a key+message. All the good stuff occurs
* Calculate the HMAC-MD5 of a key+message. All the good stuff occurs
* in {@link org.bouncycastle.crypto.macs.HMac} and
* {@link org.bouncycastle.crypto.digests.SHA256Digest}. Alternately, if
* the context property "i2p.HMACMD5" is set to true, then this whole HMAC
* generator will be transformed into HMACMD5, maintaining the same size and
* using {@link org.bouncycastle.crypto.digests.MD5Digest}.
* {@link org.bouncycastle.crypto.digests.MD5Digest}.
*
*/
public class HMACSHA256Generator {
public class HMACGenerator {
private I2PAppContext _context;
/** set of available HMAC instances for calculate */
private List _available;
/** set of available byte[] buffers for verify */
private List _availableTmp;
private boolean _useMD5;
private int _macSize;
public static final boolean DEFAULT_USE_MD5 = true;
public HMACSHA256Generator(I2PAppContext context) {
public HMACGenerator(I2PAppContext context) {
_context = context;
_available = new ArrayList(32);
_availableTmp = new ArrayList(32);
if ("true".equals(context.getProperty("i2p.HMACMD5", Boolean.toString(DEFAULT_USE_MD5).toLowerCase())))
_useMD5 = true;
else
_useMD5 = false;
if ("true".equals(context.getProperty("i2p.HMACBrokenSize", "false")))
_macSize = 32;
else
_macSize = (_useMD5 ? 16 : 32);
}
public static HMACSHA256Generator getInstance() {
return I2PAppContext.getGlobalContext().hmac();
}
/**
@ -61,24 +41,6 @@ public class HMACSHA256Generator {
return new Hash(rv);
}
/**
* Calculate the HMAC of the data with the given key
*/
/*
public Hash calculate(SessionKey key, byte data[], int offset, int length) {
if ((key == null) || (key.getData() == null) || (data == null))
throw new NullPointerException("Null arguments for HMAC");
HMac mac = acquire();
mac.init(key.getData());
mac.update(data, offset, length);
byte rv[] = new byte[Hash.HASH_LENGTH];
mac.doFinal(rv, 0);
release(mac);
return new Hash(rv);
}
*/
/**
* Calculate the HMAC of the data with the given key
*/
@ -131,10 +93,7 @@ public class HMACSHA256Generator {
// the HMAC is hardcoded to use SHA256 digest size
// for backwards compatibility. Next time we have a backwards
// incompatible change, we should update this by removing ", 32"
if (_useMD5)
return new HMac(new MD5Digest(), 32);
else
return new HMac(new SHA256Digest(), 32);
return new HMac(new MD5Digest(), 32);
}
private void release(HMac mac) {
synchronized (_available) {
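
A minimal usage sketch, assuming a router/application context is available; calculate(SessionKey, byte[]) is the one-shot entry point suggested by the code above, and the all-zero key is purely illustrative.

import net.i2p.I2PAppContext;
import net.i2p.crypto.HMACGenerator;
import net.i2p.data.Base64;
import net.i2p.data.Hash;
import net.i2p.data.SessionKey;

public class HmacUsageSketch {
    public static void main(String[] args) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        SessionKey key = new SessionKey();
        key.setData(new byte[32]);       // illustrative key only
        HMACGenerator hmac = ctx.hmac(); // now HMAC-MD5, padded to 32 bytes
        Hash mac = hmac.calculate(key, "some payload".getBytes());
        System.out.println("MAC: " + Base64.encode(mac.getData()));
    }
}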

View File

@ -7,17 +7,19 @@ import net.i2p.I2PAppContext;
import net.i2p.data.Base64;
import net.i2p.data.Hash;
import org.bouncycastle.crypto.digests.SHA256Digest;
import gnu.crypto.hash.Sha256Standalone;
/**
* Defines a wrapper for SHA-256 operation. All the good stuff occurs
* in the Bouncycastle {@link org.bouncycastle.crypto.digests.SHA256Digest}
* in the GNU-Crypto {@link gnu.crypto.hash.Sha256Standalone}
*
*/
public final class SHA256Generator {
private List _digests;
private List _digestsGnu;
public SHA256Generator(I2PAppContext context) {
_digests = new ArrayList(32);
_digestsGnu = new ArrayList(32);
}
public static final SHA256Generator getInstance() {
@ -32,47 +34,44 @@ public final class SHA256Generator {
return calculateHash(source, 0, source.length);
}
public final Hash calculateHash(byte[] source, int start, int len) {
byte rv[] = new byte[Hash.HASH_LENGTH];
calculateHash(source, start, len, rv, 0);
Sha256Standalone digest = acquireGnu();
digest.update(source, start, len);
byte rv[] = digest.digest();
releaseGnu(digest);
return new Hash(rv);
}
public final void calculateHash(byte[] source, int start, int len, byte out[], int outOffset) {
SHA256Digest digest = acquire();
Sha256Standalone digest = acquireGnu();
digest.update(source, start, len);
digest.doFinal(out, outOffset);
release(digest);
byte rv[] = digest.digest();
releaseGnu(digest);
System.arraycopy(rv, 0, out, outOffset, rv.length);
}
private SHA256Digest acquire() {
SHA256Digest rv = null;
synchronized (_digests) {
if (_digests.size() > 0)
rv = (SHA256Digest)_digests.remove(0);
private Sha256Standalone acquireGnu() {
Sha256Standalone rv = null;
synchronized (_digestsGnu) {
if (_digestsGnu.size() > 0)
rv = (Sha256Standalone)_digestsGnu.remove(0);
}
if (rv != null)
rv.reset();
else
rv = new SHA256Digest();
rv = new Sha256Standalone();
return rv;
}
private void release(SHA256Digest digest) {
synchronized (_digests) {
if (_digests.size() < 32) {
_digests.add(digest);
private void releaseGnu(Sha256Standalone digest) {
synchronized (_digestsGnu) {
if (_digestsGnu.size() < 32) {
_digestsGnu.add(digest);
}
}
}
public static void main(String args[]) {
I2PAppContext ctx = I2PAppContext.getGlobalContext();
byte orig[] = new byte[4096];
ctx.random().nextBytes(orig);
Hash old = ctx.sha().calculateHash(orig);
SHA256Digest d = new SHA256Digest();
d.update(orig, 0, orig.length);
byte out[] = new byte[Hash.HASH_LENGTH];
d.doFinal(out, 0);
System.out.println("eq? " + net.i2p.data.DataHelper.eq(out, old.getData()));
for (int i = 0; i < args.length; i++)
System.out.println("SHA256 [" + args[i] + "] = [" + Base64.encode(ctx.sha().calculateHash(args[i].getBytes()).getData()) + "]");
}
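
For reference, a minimal sketch of the two calculateHash() forms exercised by main() above, reached through the application context.

import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.data.Hash;

public class ShaUsageSketch {
    public static void main(String[] args) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        byte[] data = "hello i2p".getBytes();
        // one-shot form, returning a Hash object
        Hash h = ctx.sha().calculateHash(data);
        // in-place form, writing the 32 digest bytes at the given offset
        byte[] out = new byte[Hash.HASH_LENGTH];
        ctx.sha().calculateHash(data, 0, data.length, out, 0);
        System.out.println("forms agree? " + DataHelper.eq(h.getData(), out));
    }
}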

View File

@ -67,7 +67,7 @@ class TransientSessionKeyManager extends SessionKeyManager {
_log = context.logManager().getLog(TransientSessionKeyManager.class);
_context = context;
_outboundSessions = new HashMap(1024);
_inboundTagSets = new HashMap(64*1024);
_inboundTagSets = new HashMap(1024);
context.statManager().createRateStat("crypto.sessionTagsExpired", "How many tags/sessions are expired?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
context.statManager().createRateStat("crypto.sessionTagsRemaining", "How many tags/sessions are remaining after a cleanup?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
SimpleTimer.getInstance().addEvent(new CleanupEvent(), 60*1000);

View File

@ -822,6 +822,8 @@ public class DataHelper {
return (ms / (60 * 1000)) + "m";
} else if (ms < 3 * 24 * 60 * 60 * 1000) {
return (ms / (60 * 60 * 1000)) + "h";
} else if (ms > 365l * 24l * 60l * 60l * 1000l) {
return "n/a";
} else {
return (ms / (24 * 60 * 60 * 1000)) + "d";
}

View File

@ -53,6 +53,9 @@ public class RouterInfo extends DataStructureImpl {
public static final String PROP_CAPABILITIES = "caps";
public static final char CAPABILITY_HIDDEN = 'H';
// Public string of chars which serve as bandwidth capacity markers
// NOTE: individual chars defined in Router.java
public static final String BW_CAPABILITY_CHARS = "KLMNO";
public RouterInfo() {
setIdentity(null);
@ -179,6 +182,12 @@ public class RouterInfo extends DataStructureImpl {
return (Properties) _options.clone();
}
}
public String getOption(String opt) {
if (_options == null) return null;
synchronized (_options) {
return _options.getProperty(opt);
}
}
/**
* Configure a set of options or statistics that the router can expose
@ -334,6 +343,24 @@ public class RouterInfo extends DataStructureImpl {
return (getCapabilities().indexOf(CAPABILITY_HIDDEN) != -1);
}
/**
* Return a string representation of this node's bandwidth tier,
* or "Unknown"
*/
public String getBandwidthTier() {
String bwTiers = BW_CAPABILITY_CHARS;
String bwTier = "Unknown";
String capabilities = getCapabilities();
// Iterate through capabilities, searching for known bandwidth tier
for (int i = 0; i < capabilities.length(); i++) {
if (bwTiers.indexOf(String.valueOf(capabilities.charAt(i))) != -1) {
bwTier = String.valueOf(capabilities.charAt(i));
break;
}
}
return (bwTier);
}
public void addCapability(char cap) {
if (_options == null) _options = new OrderedProperties();
synchronized (_options) {
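
A short sketch of the new accessor; constructing a bare RouterInfo like this is only for illustration (a real one comes from a netDb lookup), and 'N' is just an example capability character.

import net.i2p.data.RouterInfo;

public class BandwidthTierSketch {
    public static void main(String[] args) {
        RouterInfo peer = new RouterInfo(); // real instances come from netDb lookups
        peer.addCapability('N');            // publish an example bandwidth tier
        // getBandwidthTier() scans the caps string for one of "KLMNO",
        // returning "Unknown" if no tier character is present.
        System.out.println("tier: " + peer.getBandwidthTier());
    }
}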

View File

@ -151,15 +151,17 @@ public class I2CPMessageReader {
_log.debug("After handling the newly received message");
}
} catch (I2CPMessageException ime) {
_log.error("Error handling message", ime);
_log.warn("Error handling message", ime);
_listener.readError(I2CPMessageReader.this, ime);
cancelRunner();
} catch (IOException ioe) {
_log.error("IO Error handling message", ioe);
_log.warn("IO Error handling message", ioe);
_listener.disconnected(I2CPMessageReader.this);
cancelRunner();
} catch (Throwable t) {
_log.log(Log.CRIT, "Unhandled error reading I2CP stream", t);
} catch (OutOfMemoryError oom) {
throw oom;
} catch (Exception e) {
_log.log(Log.CRIT, "Unhandled error reading I2CP stream", e);
_listener.disconnected(I2CPMessageReader.this);
cancelRunner();
}

View File

@ -29,6 +29,8 @@ public class BufferedStatLog implements StatLog {
private String _lastFilters;
private BufferedWriter _out;
private String _outFile;
/** short circuit for adding data: true if any filters are configured, false if the list is empty (so we can skip the synchronization) */
private volatile boolean _filtersSpecified;
private static final int BUFFER_SIZE = 1024;
private static final boolean DISABLE_LOGGING = false;
@ -44,6 +46,7 @@ public class BufferedStatLog implements StatLog {
_lastWrite = _events.length-1;
_statFilters = new ArrayList(10);
_flushFrequency = 500;
_filtersSpecified = false;
I2PThread writer = new I2PThread(new StatLogWriter(), "StatLogWriter");
writer.setDaemon(true);
writer.start();
@ -51,6 +54,7 @@ public class BufferedStatLog implements StatLog {
public void addData(String scope, String stat, long value, long duration) {
if (DISABLE_LOGGING) return;
if (!shouldLog(stat)) return;
synchronized (_events) {
_events[_eventNext].init(scope, stat, value, duration);
_eventNext = (_eventNext + 1) % _events.length;
@ -72,6 +76,7 @@ public class BufferedStatLog implements StatLog {
}
private boolean shouldLog(String stat) {
if (!_filtersSpecified) return false;
synchronized (_statFilters) {
return _statFilters.contains(stat) || _statFilters.contains("*");
}
@ -88,11 +93,18 @@ public class BufferedStatLog implements StatLog {
_statFilters.clear();
while (tok.hasMoreTokens())
_statFilters.add(tok.nextToken().trim());
if (_statFilters.size() > 0)
_filtersSpecified = true;
else
_filtersSpecified = false;
}
}
_lastFilters = val;
} else {
synchronized (_statFilters) { _statFilters.clear(); }
synchronized (_statFilters) {
_statFilters.clear();
_filtersSpecified = false;
}
}
String filename = _context.getProperty(StatManager.PROP_STAT_FILE);
@ -146,7 +158,7 @@ public class BufferedStatLog implements StatLog {
updateFilters();
int cur = start;
while (cur != end) {
if (shouldLog(_events[cur].getStat())) {
//if (shouldLog(_events[cur].getStat())) {
String when = null;
synchronized (_fmt) {
when = _fmt.format(new Date(_events[cur].getTime()));
@ -164,7 +176,7 @@ public class BufferedStatLog implements StatLog {
_out.write(" ");
_out.write(Long.toString(_events[cur].getDuration()));
_out.write("\n");
}
//}
cur = (cur + 1) % _events.length;
}
_out.flush();

View File

@ -26,6 +26,8 @@ public class Rate {
private volatile double _lifetimeTotalValue;
private volatile long _lifetimeEventCount;
private volatile long _lifetimeTotalEventTime;
private RateSummaryListener _summaryListener;
private RateStat _stat;
private volatile long _lastCoalesceDate;
private long _creationDate;
@ -108,6 +110,9 @@ public class Rate {
public long getPeriod() {
return _period;
}
public RateStat getRateStat() { return _stat; }
public void setRateStat(RateStat rs) { _stat = rs; }
/**
*
@ -175,22 +180,26 @@ public class Rate {
}
}
/** 2s is plenty of slack to deal with slow coalescing (across many stats) */
private static final int SLACK = 2000;
public void coalesce() {
long now = now();
synchronized (_lock) {
long measuredPeriod = now - _lastCoalesceDate;
if (measuredPeriod < _period) {
// no need to coalesce
if (measuredPeriod < _period - SLACK) {
// no need to coalesce (assuming we only try to do so once per minute)
if (_log.shouldLog(Log.WARN))
_log.warn("not coalescing, measuredPeriod = " + measuredPeriod + " period = " + _period);
return;
}
// ok ok, lets coalesce
// how much were we off by? (so that we can sample down the measured values)
double periodFactor = measuredPeriod / _period;
_lastTotalValue = (_currentTotalValue == 0 ? 0.0D : _currentTotalValue / periodFactor);
_lastEventCount = (_currentEventCount == 0 ? 0L : (long) (_currentEventCount / periodFactor));
_lastTotalEventTime = (_currentTotalEventTime == 0 ? 0L : (long) (_currentTotalEventTime / periodFactor));
double periodFactor = measuredPeriod / (double)_period;
_lastTotalValue = _currentTotalValue / periodFactor;
_lastEventCount = (long) ( (_currentEventCount + periodFactor - 1) / periodFactor);
_lastTotalEventTime = (long) (_currentTotalEventTime / periodFactor);
_lastCoalesceDate = now;
if (_lastTotalValue > _extremeTotalValue) {
@ -203,8 +212,13 @@ public class Rate {
_currentEventCount = 0;
_currentTotalEventTime = 0;
}
if (_summaryListener != null)
_summaryListener.add(_lastTotalValue, _lastEventCount, _lastTotalEventTime, _period);
}
public void setSummaryListener(RateSummaryListener listener) { _summaryListener = listener; }
public RateSummaryListener getSummaryListener() { return _summaryListener; }
/** what was the average value across the events in the last period? */
public double getAverageValue() {
if ((_lastTotalValue != 0) && (_lastEventCount > 0))
@ -237,10 +251,12 @@ public class Rate {
*/
public double getLastEventSaturation() {
if ((_lastEventCount > 0) && (_lastTotalEventTime > 0)) {
double eventTime = (double) _lastTotalEventTime / (double) _lastEventCount;
/*double eventTime = (double) _lastTotalEventTime / (double) _lastEventCount;
double maxEvents = _period / eventTime;
double saturation = _lastEventCount / maxEvents;
return saturation;
*/
return ((double)_lastTotalEventTime) / (double)_period;
}
return 0.0D;
@ -417,6 +433,7 @@ public class Rate {
public boolean equals(Object obj) {
if ((obj == null) || (obj.getClass() != Rate.class)) return false;
if (obj == this) return true;
Rate r = (Rate) obj;
return _period == r.getPeriod() && _creationDate == r.getCreationDate() &&
//_lastCoalesceDate == r.getLastCoalesceDate() &&
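
A worked example (made-up numbers) of the scaling coalesce() now applies when it runs later than the nominal period:

// Illustrative arithmetic only, mirroring the periodFactor logic above.
public class CoalesceScalingSketch {
    public static void main(String[] args) {
        long period = 60 * 1000;           // nominal 60s rate period
        long measuredPeriod = 90 * 1000;   // coalesced 30s late
        double currentTotalValue = 300.0;
        long currentEventCount = 9;

        double periodFactor = measuredPeriod / (double) period;              // 1.5
        double lastTotalValue = currentTotalValue / periodFactor;            // 200.0
        long lastEventCount =
            (long) ((currentEventCount + periodFactor - 1) / periodFactor);  // ceiling-style: 6
        System.out.println(lastTotalValue + " over " + lastEventCount + " events");
    }
}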

View File

@ -27,8 +27,10 @@ public class RateStat {
_description = description;
_groupName = group;
_rates = new Rate[periods.length];
for (int i = 0; i < periods.length; i++)
for (int i = 0; i < periods.length; i++) {
_rates[i] = new Rate(periods[i]);
_rates[i].setRateStat(this);
}
}
public void setStatLog(StatLog sl) { _statLog = sl; }
@ -159,6 +161,7 @@ public class RateStat {
_rates[i].load(props, curPrefix, treatAsCurrent);
} catch (IllegalArgumentException iae) {
_rates[i] = new Rate(period);
_rates[i].setRateStat(this);
if (_log.shouldLog(Log.WARN))
_log.warn("Rate for " + prefix + " is corrupt, reinitializing that period");
}

View File

@ -0,0 +1,14 @@
package net.i2p.stat;
/**
* Receive the state of the rate when it is coalesced
*/
public interface RateSummaryListener {
/**
* @param totalValue sum of all event values in the most recent period
* @param eventCount how many events occurred
* @param totalEventTime how long the events were running for
* @param period how long this period is
*/
void add(double totalValue, long eventCount, double totalEventTime, long period);
}
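
A trivial implementation sketch; a listener like this would be attached through the Rate.setSummaryListener() hook added above (the class name is illustrative).

import net.i2p.stat.RateSummaryListener;

public class PrintingSummaryListener implements RateSummaryListener {
    public void add(double totalValue, long eventCount, double totalEventTime, long period) {
        System.out.println(period + "ms period: " + eventCount + " events, total value "
                           + totalValue + ", busy for " + totalEventTime + "ms");
    }
}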

View File

@ -43,7 +43,7 @@ public class StatManager {
_log = context.logManager().getLog(StatManager.class);
_context = context;
_frequencyStats = Collections.synchronizedMap(new HashMap(128));
_rateStats = Collections.synchronizedMap(new HashMap(128));
_rateStats = new HashMap(128); // synchronized only on add //Collections.synchronizedMap(new HashMap(128));
_statLog = new BufferedStatLog(context);
}
@ -80,10 +80,12 @@ public class StatManager {
* @param periods array of period lengths (in milliseconds)
*/
public void createRateStat(String name, String description, String group, long periods[]) {
if (_rateStats.containsKey(name)) return;
RateStat rs = new RateStat(name, description, group, periods);
if (_statLog != null) rs.setStatLog(_statLog);
_rateStats.put(name, rs);
synchronized (_rateStats) {
if (_rateStats.containsKey(name)) return;
RateStat rs = new RateStat(name, description, group, periods);
if (_statLog != null) rs.setStatLog(_statLog);
_rateStats.put(name, rs);
}
}
/** update the given frequency statistic, taking note that an event occurred (and recalculating all frequencies) */
@ -94,7 +96,7 @@ public class StatManager {
/** update the given rate statistic, taking note that the given data point was received (and recalculating all rates) */
public void addRateData(String name, long data, long eventDuration) {
RateStat stat = (RateStat) _rateStats.get(name);
RateStat stat = (RateStat) _rateStats.get(name); // unsynchronized
if (stat != null) stat.addData(data, eventDuration);
}
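
A minimal sketch of the create-then-feed pattern this change makes safe to call concurrently; the stat name and values are made up (compare the "client.availableMessages" stat added to I2PSessionImpl above).

import net.i2p.I2PAppContext;

public class StatUsageSketch {
    public static void main(String[] args) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        // create once (a repeat call with the same name is a no-op),
        // with 1 minute and 10 minute rate periods
        ctx.statManager().createRateStat("example.queueSize",
                "How deep is the example queue", "Example",
                new long[] { 60*1000, 10*60*1000 });
        // feed data points as events occur
        ctx.statManager().addRateData("example.queueSize", 42, 0);
    }
}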

View File

@ -146,15 +146,15 @@ public class DecayingBloomFilter {
for (int i = 0; i < _extenders.length; i++)
DataHelper.xor(entry, offset, _extenders[i], 0, _extended, _entryBytes * (i+1), _entryBytes);
boolean seen = _current.member(_extended);
seen = seen || _previous.member(_extended);
boolean seen = _current.locked_member(_extended);
seen = seen || _previous.locked_member(_extended);
if (seen) {
_currentDuplicates++;
return true;
} else {
if (addIfNew) {
_current.insert(_extended);
_previous.insert(_extended);
_current.locked_insert(_extended);
_previous.locked_insert(_extended);
}
return false;
}

View File

@ -0,0 +1,51 @@
package net.i2p.util;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import net.i2p.I2PAppContext;
class Executor implements Runnable {
private I2PAppContext _context;
private Log _log;
private List _readyEvents;
public Executor(I2PAppContext ctx, Log log, List events) {
_context = ctx;
_readyEvents = events;
}
public void run() {
while (true) {
SimpleTimer.TimedEvent evt = null;
synchronized (_readyEvents) {
if (_readyEvents.size() <= 0)
try { _readyEvents.wait(); } catch (InterruptedException ie) {}
if (_readyEvents.size() > 0)
evt = (SimpleTimer.TimedEvent)_readyEvents.remove(0);
}
if (evt != null) {
long before = _context.clock().now();
try {
evt.timeReached();
} catch (Throwable t) {
log("wtf, event borked: " + evt, t);
}
long time = _context.clock().now() - before;
if ( (time > 1000) && (_log != null) && (_log.shouldLog(Log.WARN)) )
_log.warn("wtf, event execution took " + time + ": " + evt);
}
}
}
private void log(String msg, Throwable t) {
synchronized (this) {
if (_log == null)
_log = I2PAppContext.getGlobalContext().logManager().getLog(SimpleTimer.class);
}
_log.log(Log.CRIT, msg, t);
}
}

View File

@ -14,7 +14,7 @@ import java.security.SecureRandom;
import net.i2p.I2PAppContext;
import net.i2p.crypto.EntropyHarvester;
import gnu.crypto.prng.Fortuna;
import gnu.crypto.prng.AsyncFortunaStandalone;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
@ -26,13 +26,13 @@ import java.io.IOException;
*
*/
public class FortunaRandomSource extends RandomSource implements EntropyHarvester {
private Fortuna _fortuna;
private AsyncFortunaStandalone _fortuna;
private double _nextGaussian;
private boolean _haveNextGaussian;
public FortunaRandomSource(I2PAppContext context) {
super(context);
_fortuna = new Fortuna();
_fortuna = new AsyncFortunaStandalone();
byte seed[] = new byte[1024];
if (initSeed(seed)) {
_fortuna.seed(seed);
@ -41,6 +41,7 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
sr.nextBytes(seed);
_fortuna.seed(seed);
}
_fortuna.startup();
// kickstart it
_fortuna.nextBytes(seed);
_haveNextGaussian = false;
@ -76,15 +77,33 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
if (n<=0)
throw new IllegalArgumentException("n must be positive");
if ((n & -n) == n) // i.e., n is a power of 2
return (int)((n * (long)nextBits(31)) >> 31);
////
// this shortcut from sun's docs neither works nor is necessary.
//
//if ((n & -n) == n) {
// // i.e., n is a power of 2
// return (int)((n * (long)nextBits(31)) >> 31);
//}
int bits, val;
do {
bits = nextBits(31);
val = bits % n;
} while(bits - val + (n-1) < 0);
return val;
int numBits = 0;
int remaining = n;
int rv = 0;
while (remaining > 0) {
remaining >>= 1;
rv += nextBits(8) << numBits*8;
numBits++;
}
if (rv < 0)
rv += n;
return rv % n;
//int bits, val;
//do {
// bits = nextBits(31);
// val = bits % n;
//} while(bits - val + (n-1) < 0);
//
//return val;
}
/**
@ -157,11 +176,16 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
* through 2^numBits-1
*/
protected synchronized int nextBits(int numBits) {
int rv = 0;
long rv = 0;
int bytes = (numBits + 7) / 8;
for (int i = 0; i < bytes; i++)
rv += ((_fortuna.nextByte() & 0xFF) << i*8);
return rv;
//rv >>>= (64-numBits);
if (rv < 0)
rv = 0 - rv;
int off = 8*bytes - numBits;
rv >>>= off;
return (int)rv;
}
public EntropyHarvester harvester() { return this; }
@ -175,4 +199,26 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
public synchronized void feedEntropy(String source, byte[] data, int offset, int len) {
_fortuna.addRandomBytes(data, offset, len);
}
public static void main(String args[]) {
try {
RandomSource rand = I2PAppContext.getGlobalContext().random();
if (true) {
for (int i = 0; i < 1000; i++)
if (rand.nextFloat() < 0)
throw new RuntimeException("negative!");
System.out.println("All positive");
return;
}
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.util.zip.GZIPOutputStream gos = new java.util.zip.GZIPOutputStream(baos);
for (int i = 0; i < 1024*1024; i++) {
int c = rand.nextInt(256);
gos.write((byte)c);
}
gos.finish();
byte compressed[] = baos.toByteArray();
System.out.println("Compressed size of 1MB: " + compressed.length);
} catch (Exception e) { e.printStackTrace(); }
}
}
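
A worked example (made-up bytes) of how the reworked nextBits() assembles a value from whole PRNG bytes and then shifts off the surplus bits:

// Illustrative arithmetic only; the two bytes stand in for _fortuna.nextByte().
public class NextBitsSketch {
    public static void main(String[] args) {
        int numBits = 12;
        int bytes = (numBits + 7) / 8;               // 2 bytes needed
        byte[] fake = { (byte) 0xAB, (byte) 0xCD };  // pretend PRNG output
        long rv = 0;
        for (int i = 0; i < bytes; i++)
            rv += ((fake[i] & 0xFF) << (i * 8));     // little-endian assembly: 0xCDAB
        if (rv < 0)
            rv = 0 - rv;
        int off = 8 * bytes - numBits;               // 4 surplus bits
        rv >>>= off;                                 // result is in [0, 2^12)
        System.out.println(rv);                      // prints 3290 (0xCDA)
    }
}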

View File

@ -48,6 +48,12 @@ public class I2PThread extends Thread {
if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
_createdBy = new Exception("Created by");
}
public I2PThread(Runnable r, String name, boolean isDaemon) {
super(r, name);
setDaemon(isDaemon);
if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
_createdBy = new Exception("Created by");
}
private void log(int level, String msg) { log(level, msg, null); }
private void log(int level, String msg, Throwable t) {
@@ -113,4 +119,4 @@ public class I2PThread extends Thread {
} catch (Throwable tt) { // nop
}
}
}
}
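The new three-argument constructor above lets callers set the thread name and daemon flag in one step. A hypothetical usage sketch follows (the wrapper class and task are made up for illustration, and the import assumes I2PThread's usual net.i2p.util package).

    import net.i2p.util.I2PThread;

    // Hypothetical usage of the new I2PThread(Runnable, String, boolean) constructor.
    public class DaemonThreadExample {
        public static void main(String[] args) {
            Runnable task = new Runnable() {
                public void run() { System.out.println("working"); }
            };
            // name and daemon status are supplied at construction time,
            // replacing a separate setDaemon(true) call after construction
            I2PThread worker = new I2PThread(task, "ExampleWorker", true);
            worker.start();
        }
    }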

View File

@@ -31,20 +31,24 @@ public class SimpleTimer {
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(SimpleTimer.class);
_events = new TreeMap();
_eventTimes = new HashMap(1024);
_eventTimes = new HashMap(256);
_readyEvents = new ArrayList(4);
I2PThread runner = new I2PThread(new SimpleTimerRunner());
runner.setName(name);
runner.setDaemon(true);
runner.start();
for (int i = 0; i < 3; i++) {
I2PThread executor = new I2PThread(new Executor());
I2PThread executor = new I2PThread(new Executor(_context, _log, _readyEvents));
executor.setName(name + "Executor " + i);
executor.setDaemon(true);
executor.start();
}
}
public void reschedule(TimedEvent event, long timeoutMs) {
addEvent(event, timeoutMs, false);
}
/**
* Queue up the given event to be fired no sooner than timeoutMs from now.
* However, if this event is already scheduled, the event will be scheduled
@@ -52,7 +56,12 @@
* timeout. If this is not the desired behavior, call removeEvent first.
*
*/
public void addEvent(TimedEvent event, long timeoutMs) {
public void addEvent(TimedEvent event, long timeoutMs) { addEvent(event, timeoutMs, true); }
/**
* @param useEarliestTime if its already scheduled, use the earlier of the
* two timeouts, else use the later
*/
public void addEvent(TimedEvent event, long timeoutMs, boolean useEarliestTime) {
int totalEvents = 0;
long now = System.currentTimeMillis();
long eventTime = now + timeoutMs;
@@ -61,11 +70,20 @@
// remove the old scheduled position, then reinsert it
Long oldTime = (Long)_eventTimes.get(event);
if (oldTime != null) {
if (oldTime.longValue() < eventTime) {
_events.notifyAll();
return; // already scheduled for sooner than requested
if (useEarliestTime) {
if (oldTime.longValue() < eventTime) {
_events.notifyAll();
return; // already scheduled for sooner than requested
} else {
_events.remove(oldTime);
}
} else {
_events.remove(oldTime);
if (oldTime.longValue() > eventTime) {
_events.notifyAll();
return; // already scheduled for later than the given period
} else {
_events.remove(oldTime);
}
}
}
while (_events.containsKey(time))
@@ -96,7 +114,7 @@
long timeToAdd = System.currentTimeMillis() - now;
if (timeToAdd > 50) {
if (_log.shouldLog(Log.WARN))
_log.warn("timer contention: took " + timeToAdd + "ms to add a job");
_log.warn("timer contention: took " + timeToAdd + "ms to add a job with " + totalEvents + " queued");
}
}
@@ -123,14 +141,6 @@
public void timeReached();
}
private void log(String msg, Throwable t) {
synchronized (this) {
if (_log == null)
_log = I2PAppContext.getGlobalContext().logManager().getLog(SimpleTimer.class);
}
_log.log(Log.CRIT, msg, t);
}
private long _occurredTime;
private long _occurredEventCount;
private TimedEvent _recentEvents[] = new TimedEvent[5];
@@ -210,30 +220,5 @@
}
}
}
private class Executor implements Runnable {
public void run() {
while (true) {
TimedEvent evt = null;
synchronized (_readyEvents) {
if (_readyEvents.size() <= 0)
try { _readyEvents.wait(); } catch (InterruptedException ie) {}
if (_readyEvents.size() > 0)
evt = (TimedEvent)_readyEvents.remove(0);
}
if (evt != null) {
long before = _context.clock().now();
try {
evt.timeReached();
} catch (Throwable t) {
log("wtf, event borked: " + evt, t);
}
long time = _context.clock().now() - before;
if ( (time > 1000) && (_log != null) && (_log.shouldLog(Log.WARN)) )
_log.warn("wtf, event execution took " + time + ": " + evt);
}
}
}
}
}
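With the change above, addEvent() keeps the earlier of two deadlines for an already-scheduled event, while the new reschedule() keeps the later one. A hypothetical sketch of the difference (the helper class and event are illustrative, the timer instance is assumed to be obtained elsewhere in the router, and the import assumes SimpleTimer's usual net.i2p.util package):

    import net.i2p.util.SimpleTimer;

    // Hypothetical illustration of addEvent() vs. the new reschedule().
    public class RescheduleExample {
        static void schedule(SimpleTimer timer) {
            SimpleTimer.TimedEvent ping = new SimpleTimer.TimedEvent() {
                public void timeReached() { System.out.println("fired"); }
            };
            timer.addEvent(ping, 5000);    // schedules the event for ~5s from now
            timer.addEvent(ping, 10000);   // no-op: the earlier 5s deadline wins
            timer.reschedule(ping, 10000); // pushes the deadline out to ~10s
        }
    }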

View File

@@ -1,292 +0,0 @@
package org.bouncycastle.crypto.digests;
/*
* Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle
* (http://www.bouncycastle.org)
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated
* documentation files (the "Software"), to deal in the Software
* without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
/**
* FIPS 180-2 implementation of SHA-256.
*
* <pre>
* block word digest
* SHA-1 512 32 160
* SHA-256 512 32 256
* SHA-384 1024 64 384
* SHA-512 1024 64 512
* </pre>
*/
public class SHA256Digest
extends GeneralDigest
{
private static final int DIGEST_LENGTH = 32;
private int H1, H2, H3, H4, H5, H6, H7, H8;
private int[] X = new int[64];
private int xOff;
/**
* Standard constructor
*/
public SHA256Digest()
{
reset();
}
/**
* Copy constructor. This will copy the state of the provided
* message digest.
*/
public SHA256Digest(SHA256Digest t)
{
super(t);
H1 = t.H1;
H2 = t.H2;
H3 = t.H3;
H4 = t.H4;
H5 = t.H5;
H6 = t.H6;
H7 = t.H7;
H8 = t.H8;
System.arraycopy(t.X, 0, X, 0, t.X.length);
xOff = t.xOff;
}
public String getAlgorithmName()
{
return "SHA-256";
}
public int getDigestSize()
{
return DIGEST_LENGTH;
}
protected void processWord(
byte[] in,
int inOff)
{
X[xOff++] = ((in[inOff] & 0xff) << 24) | ((in[inOff + 1] & 0xff) << 16)
| ((in[inOff + 2] & 0xff) << 8) | ((in[inOff + 3] & 0xff));
if (xOff == 16)
{
processBlock();
}
}
private void unpackWord(
int word,
byte[] out,
int outOff)
{
out[outOff] = (byte)(word >>> 24);
out[outOff + 1] = (byte)(word >>> 16);
out[outOff + 2] = (byte)(word >>> 8);
out[outOff + 3] = (byte)word;
}
protected void processLength(
long bitLength)
{
if (xOff > 14)
{
processBlock();
}
X[14] = (int)(bitLength >>> 32);
X[15] = (int)(bitLength & 0xffffffff);
}
public int doFinal(
byte[] out,
int outOff)
{
finish();
unpackWord(H1, out, outOff);
unpackWord(H2, out, outOff + 4);
unpackWord(H3, out, outOff + 8);
unpackWord(H4, out, outOff + 12);
unpackWord(H5, out, outOff + 16);
unpackWord(H6, out, outOff + 20);
unpackWord(H7, out, outOff + 24);
unpackWord(H8, out, outOff + 28);
reset();
return DIGEST_LENGTH;
}
/**
* reset the chaining variables
*/
public void reset()
{
super.reset();
/* SHA-256 initial hash value
* The first 32 bits of the fractional parts of the square roots
* of the first eight prime numbers
*/
H1 = 0x6a09e667;
H2 = 0xbb67ae85;
H3 = 0x3c6ef372;
H4 = 0xa54ff53a;
H5 = 0x510e527f;
H6 = 0x9b05688c;
H7 = 0x1f83d9ab;
H8 = 0x5be0cd19;
xOff = 0;
for (int i = 0; i != X.length; i++)
{
X[i] = 0;
}
}
protected void processBlock()
{
//
// expand 16 word block into 64 word blocks.
//
for (int t = 16; t <= 63; t++)
{
X[t] = Theta1(X[t - 2]) + X[t - 7] + Theta0(X[t - 15]) + X[t - 16];
}
//
// set up working variables.
//
int a = H1;
int b = H2;
int c = H3;
int d = H4;
int e = H5;
int f = H6;
int g = H7;
int h = H8;
for (int t = 0; t <= 63; t++)
{
int T1, T2;
T1 = h + Sum1(e) + Ch(e, f, g) + K[t] + X[t];
T2 = Sum0(a) + Maj(a, b, c);
h = g;
g = f;
f = e;
e = d + T1;
d = c;
c = b;
b = a;
a = T1 + T2;
}
H1 += a;
H2 += b;
H3 += c;
H4 += d;
H5 += e;
H6 += f;
H7 += g;
H8 += h;
//
// reset the offset and clean out the word buffer.
//
xOff = 0;
for (int i = 0; i != X.length; i++)
{
X[i] = 0;
}
}
private int rotateRight(
int x,
int n)
{
return (x >>> n) | (x << (32 - n));
}
/* SHA-256 functions */
private int Ch(
int x,
int y,
int z)
{
return ((x & y) ^ ((~x) & z));
}
private int Maj(
int x,
int y,
int z)
{
return ((x & y) ^ (x & z) ^ (y & z));
}
private int Sum0(
int x)
{
return rotateRight(x, 2) ^ rotateRight(x, 13) ^ rotateRight(x, 22);
}
private int Sum1(
int x)
{
return rotateRight(x, 6) ^ rotateRight(x, 11) ^ rotateRight(x, 25);
}
private int Theta0(
int x)
{
return rotateRight(x, 7) ^ rotateRight(x, 18) ^ (x >>> 3);
}
private int Theta1(
int x)
{
return rotateRight(x, 17) ^ rotateRight(x, 19) ^ (x >>> 10);
}
/* SHA-256 Constants
* (represent the first 32 bits of the fractional parts of the
* cube roots of the first sixty-four prime numbers)
*/
static final int K[] = {
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
};
}
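The deleted class above is a self-contained FIPS 180-2 SHA-256 implementation; per the 2006-02-26 history entry below, the router switched to the gnu-crypto implementation for speed. For a quick sanity check of any replacement, the standard "abc" test vector can be printed via the JDK provider, assuming the runtime offers a "SHA-256" MessageDigest (the class name here is illustrative):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Standalone sanity check: prints SHA-256("abc") using the JDK provider so
    // it can be compared against the FIPS 180-2 test vector by eye.
    public class Sha256Check {
        public static void main(String[] args) throws NoSuchAlgorithmException {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest("abc".getBytes());
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < digest.length; i++)
                hex.append(String.format("%02x", digest[i] & 0xff));
            // expected: ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
            System.out.println(hex);
        }
    }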

View File

@@ -1,4 +1,587 @@
$Id: history.txt,v 1.401 2006/02/16 05:33:31 jrandom Exp $
$Id: history.txt,v 1.502 2006-07-28 23:43:04 jrandom Exp $
* 2006-07-28 0.6.1.24 released
2006-07-28 jrandom
* Don't try to reverify too many netDb entries at once (thanks
cervantes and Complication!)
2006-07-28 jrandom
* Actually fix the threading deadlock issue in the netDb (removing
the synchronized access to individual kbuckets while validating
individual entries) (thanks cervantes, postman, frosk, et al!)
* 2006-07-27 0.6.1.23 released
2006-07-27 jrandom
* Cut down NTCP connection establishments once we know the peer is skewed
(rather than wait for full establishment before verifying)
* Removed a lock on the stats framework when accessing rates, which
shouldn't be a problem, assuming rates are created (pretty much) all at
once and merely updated during the lifetime of the jvm.
2006-07-27 jrandom
* Further NTCP write status cleanup
* Handle more oddly-timed NTCP disconnections (thanks bar!)
2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
references as part of the update check
* If we have been up for a while, don't accept really old
router references (published 2 or more days ago)
* Drop router references once they are no longer valid, even if
they were allowed in due to the lax restrictions on startup
2006-07-26 jrandom
* Every time we create a new router identity, add an entry to the
new "identlog.txt" text file in the I2P install directory. For
debugging purposes, publish the count of how many identities the
router has cycled through, though not the identities itself.
* Cleaned up the way the multitransport shitlisting worked, and
added per-transport shitlists
* When dropping a router reference locally, first fire a netDb
lookup for the entry
* Take the peer selection filters into account when organizing the
profiles (thanks Complication!)
* Avoid some obvious configuration errors for the NTCP transport
(invalid ports, "null" ip, etc)
* Deal with some small NTCP bugs found in the wild (unresolveable
hosts, strange network discons, etc)
* Send our netDb info to peers we have direct NTCP connections to
after each 6-12 hours of connection uptime
* Clean up the NTCP reading and writing queue logic to avoid some
potential delays
* Allow people to specify the IP that the SSU transport binds on
locally, via the advanced config "i2np.udp.bindInterface=1.2.3.4"
* 2006-07-18 0.6.1.22 released
2006-07-18 jrandom
* Add a failsafe to the NTCP transport to make sure we keep
pumping writes when we should.
* Properly reallow 16-32KBps routers in the default config
(thanks Complication!)
2006-07-16 Complication
* Collect tunnel build agree/reject/expire statistics
for each bandwidth tier of peers (and peers of unknown tiers,
even if those shouldn't exist)
2006-07-14 jrandom
* Improve the multitransport shitlisting (thanks Complication!)
* Allow routers with a capacity of 16-32KBps to be used in tunnels under
the default configuration (thanks for the stats Complication!)
* Properly allow older router references to load on startup
(thanks bar, Complication, et al!)
* Add a new "i2p.alwaysAllowReseed" advanced config property, though
hopefully today's changes should make this unnecessary (thanks void!)
* Improved NTCP buffering
* Close NTCP connections if we are too backlogged when writing to them
2006-07-04 jrandom
* New NIO-based tcp transport (NTCP), enabled by default for outbound
connections only. Those who configure their NAT/firewall to allow
inbound connections and specify the external host and port
(dyndns/etc is ok) on /config.jsp can receive inbound connections.
SSU is still enabled for use by default for all users as a fallback.
* Substantial bugfix to the tunnel gateway processing to transfer
messages sequentially instead of interleaved
* Renamed GNU/crypto classes to avoid name clashes with kaffe and other
GNU/Classpath based JVMs
* Adjust the Fortuna PRNG's pooling system to reduce contention on
refill with a background thread to refill the output buffer
* Add per-transport support for the shitlist
* Add a new async pumped tunnel gateway to reduce tunnel dispatcher
contention
2006-07-01 Complication
* Ensure that the I2PTunnel web interface won't update tunnel settings
for shared clients when a non-shared client is modified
(thanks for spotting, BarkerJr!)
2006-06-14 cervantes
* Small tweak to I2PTunnel CSS, so it looks better with desktops
that use Bitstream Vera fonts @ 96 dpi
* 2006-06-14 0.6.1.21 released
2006-06-13 jrandom
* Use a minimum uptime of 2 hours, not 4 (oops)
2006-06-13 jrandom
* Cut down the proactive rejections due to queue size - if we are
at the point of having decrypted the request off the queue, might
as well let it through, rather than waste that decryption
2006-06-11 Kloug
* Bugfix to the I2PTunnel IRC filter to support multiple concurrent
outstanding pings/pongs
2006-06-10 jrandom
* Further reduction in proactive rejections
2006-06-09 jrandom
* Don't let the pending tunnel request queue grow beyond reason
(letting things sit for up to 30s when they fail after 10s
seems a bit... off)
2006-06-08 jrandom
* Be more conservative in the proactive rejections
2006-06-04 Complication
* Trim out sending a blank line before USER in susimail.
Seemed to break in rare cases, thanks for reporting, Brachtus!
* 2006-06-04 0.6.1.20 released
2006-06-04 jrandom
* Reduce the SSU ack frequency
* Tweaked the tunnel rejection settings to reject less aggressively
2006-05-31 jrandom
* Only send netDb searches to the floodfill peers for the time being
* Add some proof of concept filters for tunnel participation. By default,
it will skip peers with an advertised bandwidth of less than 32KBps or
an advertised uptime of less than 2 hours. If this is sufficient, a
safer implementation of these filters will be implemented.
* 2006-05-18 0.6.1.19 released
2006-05-18 jrandom
* Made the SSU ACKs less frequent when possible
2006-05-17 Complication
* Fix some oversights in my previous changes:
adjust some loglevels, make a few statements less wasteful,
make one comparison less confusing and more likely to log unexpected values
2006-05-17 jrandom
* Make the peer page sortable
* SSU modifications to cut down on unnecessary connection failures
2006-05-16 jrandom
* Further shitlist randomizations
* Adjust the stats monitored for detecting cpu overload when dropping new
tunnel requests
2006-05-15 jrandom
* Add a load dependent throttle on the pending inbound tunnel request
backlog
* Increased the tunnel test failure slack before killing a tunnel
2006-05-13 Complication
* Separate growth factors for tunnel count and tunnel test time
* Reduce growth factors, so probabilistic throttle would activate
* Square probAccept values to decelerate stronger when far from average
* Create a bandwidth stat with approximately 15-second half life
* Make allowTunnel() check the 1-second bandwidth for overload
before doing allowance calculations using 15-second bandwidth
* Tweak the overload detector in BuildExecutor to be more sensitive
for rising edges, add ability to initiate tunnel drops
* Add a function to seek and drop the highest-rate participating tunnel,
keeping a fixed+random grace period between such drops.
It doesn't seem very effective, so disabled by default
("router.dropTunnelsOnOverload=true" to enable)
2006-05-11 jrandom
* PRNG bugfix (thanks cervantes and Complication!)
* 2006-05-09 0.6.1.18 released
2006-05-09 jrandom
* Further tunnel creation timeout revamp
2006-05-07 Complication
* Fix problem whereby repeated calls to allowed() would make
the 1-tunnel exception permit more than one concurrent build
2006-05-06 jrandom
* Readjust the tunnel creation timeouts to reject less but fail earlier,
while tracking the extended timeout events.
2006-05-04 jrandom
* Short circuit a highly congested part of the stat logging unless its
required (may or may not help with a synchronization issue reported by
andreas)
2006-05-03 Complication
* Allow a single build attempt to proceed despite 1-minute overload
only if the 1-second rate shows enough spare bandwidth
(e.g. overload has already eased)
2006-05-02 Complication
* Correct a misnamed property in SummaryHelper.java
to avoid confusion
* Make the maximum allowance of our own concurrent
tunnel builds slightly adaptive: one concurrent build per 6 KB/s
within the fixed range 2..10
* While overloaded, try to avoid completely choking our own build attempts,
instead prefer limiting them to 1
2006-05-01 jrandom
* Adjust the tunnel build timeouts to cut down on expirations, and
increased the SSU connection establishment retransmission rate to
something less glacial.
* For the first 5 minutes of uptime, be less aggressive with tunnel
exploration, opting for more reliable peers to start with.
2006-05-01 jrandom
* Fix for a netDb lookup race (thanks cervantes!)
2006-04-27 jrandom
* Avoid a race in the message reply registry (thanks cervantes!)
2006-04-27 jrandom
* Fixed the tunnel expiration desync code (thanks Complication!)
* 2006-04-23 0.6.1.17 released
2006-04-19 jrandom
* Adjust how we pick high capacity peers to allow the inclusion of fast
peers (the previous filter assumed an old usage pattern)
* New set of stats to help track per-packet-type bandwidth usage better
* Cut out the proactive tail drop from the SSU transport, for now
* Reduce the frequency of tunnel build attempts while we're saturated
* Don't drop tunnel requests as easily - prefer to explicitly reject them
* 2006-04-15 0.6.1.16 released
2006-04-15 jrandom
* Adjust the proactive tunnel request dropping so we will reject what we
can instead of dropping so much (but still dropping if we get too far
overloaded)
2006-04-14 jrandom
* 0 isn't very random
* Adjust the tunnel drop to be more reasonable
2006-04-14 jrandom
* -28.00230115311259 is not between 0 and 1 in any universe I know.
* Made the bw-related tunnel join throttle much simpler
2006-04-14 jrandom
* Make some more stats graphable, and allow some internal tweaking on the
tunnel pairing for creation and testing.
* 2006-04-13 0.6.1.15 released
2006-04-12 jrandom
* Added a further failsafe against trying to queue up too many messages to
a peer.
2006-04-12 jrandom
* Watch out for failed syndie index fetches (thanks bar!)
2006-04-11 jrandom
* Throttling improvements on SSU - throttle all transmissions to a peer
when we are retransmitting, not just retransmissions. Also, if
we're already retransmitting to a peer, probabilistically tail drop new
messages targeting that peer, based on the estimated wait time before
transmission.
* Fixed the rounding error in the inbound tunnel drop probability.
2006-04-10 jrandom
* Include a combined send/receive graph (good idea cervantes!)
* Proactively drop inbound tunnel requests probabilistically as the
estimated queue time approaches our limit, rather than letting them all
through up to that limit.
2006-04-08 jrandom
* Stat summarization fix (removing the occasional holes in the jrobin
graphs)
2006-04-08 jrandom
* Process inbound tunnel requests more efficiently
* Proactively drop inbound tunnel requests if the queue before we'd
process it in is too long (dynamically adjusted by cpu load)
* Adjust the tunnel rejection throttle to reject requests when we have to
proactively drop too many requests.
* Display the number of pending inbound tunnel join requests on the router
console (as the "handle backlog")
* Include a few more stats in the default set of graphs
2006-04-06 jrandom
* Fix for a bug in the new irc ping/pong filter (thanks Complication!)
2006-04-06 jrandom
* Fixed a typo in the reply cleanup code
* 2006-04-05 0.6.1.14 released
2006-04-05 jrandom
* Cut down on the time that we allow a tunnel creation request to sit by
without response, and reject tunnel creation requests that are lagged
locally. Also switch to a bounded FIFO instead of a LIFO
* Threading tweaks for the message handling (thanks bar!)
* Don't add addresses to syndie with blank names (thanks Complication!)
* Further ban clearance
2006-04-05 jrandom
* Fix during the ssu handshake to avoid an unnecessary failure on
packet retransmission (thanks ripple!)
* Fix during the SSU handshake to use the negotiated session key asap,
rather than using the intro key for more than we should (thanks ripple!)
* Fixes to the message reply registry (thanks Complication!)
* More comprehensive syndie banning (for repeated pushes)
* Publish the router's ballpark bandwidth limit (w/in a power of 2), for
testing purposes
* Put a floor back on the capacity threshold, so too many failing peers
won't cause us to pick very bad peers (unless we have very few good
ones)
* Bugfix to cut down on peers using introducers unnecessarily (thanks
Complication!)
* Reduced the default streaming lib message size to fit into a single
tunnel message, rather than require 5 tunnel messages to be transferred
without loss before recomposition. This reduces throughput, but should
increase reliability, at least for the time being.
* Misc small bugfixes in the router (thanks all!)
* More tweaking for Syndie's CSS (thanks Doubtful Salmon!)
2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
* Filter the IRC ping/pong messages, as some clients send unsafe
information in them (thanks aardvax and dust!)
2006-03-30 jrandom
* Substantially reduced the lock contention in the message registry (a
major hotspot that can choke most threads). Also reworked the locking
so we don't need per-message timer events
* No need to have additional per-peer message clearing, as they are
either unregistered individually or expired.
* Include some of the more transient tunnel throttling
* 2006-03-26 0.6.1.13 released
2006-03-25 jrandom
* Added a simple purge and ban of syndie authors, shown as the
"Purge and ban" button on the addressbook for authors that are already
on the ignore list. All of their entries and metadata are deleted from
the archive, and they are transparently filtered from any remote
syndication (so no user on the syndie instance will pull any new posts
from them)
* More strict tunnel join throttling when congested
2006-03-24 jrandom
* Try to desync tunnel building near startup (thanks Complication!)
* If we are highly congested, fall back on only querying the floodfill
netDb peers, and only storing to those peers too
* Cleaned up the floodfill-only queries
2006-03-21 jrandom
* Avoid a very strange (unconfirmed) bug that people using the systray's
browser picker dialog could cause by disabling the GUI-based browser
picker.
* Cut down on subsequent streaming lib reset packets transmitted
* Use a larger MTU more often
* Allow netDb searches to query shitlisted peers, as the queries are
indirect.
* Add an option to disable non-floodfill netDb searches (non-floodfill
searches are used by default, but can be disabled by adding
netDb.floodfillOnly=true to the advanced config)
2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
* Workaround some oddball errors
2006-03-18 jrandom
* Added a new graphs.jsp page to show all of the stats being harvested
2006-03-18 jrandom
* Made the netDb search load limitations a little less stringent
* Add support for specifying the number of periods to be plotted on the
graphs - e.g. to plot only the last hour of a stat that is averaged at
the 60 second period, add &periodCount=60
2006-03-17 jrandom
* Add support for graphing the event count as well as the average stat
value (done by adding &showEvents=true to the URL). Also supports
hiding the legend (&hideLegend=true), the grid (&hideGrid=true), and
the title (&hideTitle=true).
* Removed an unnecessary arbitrary filter on the profile organizer so we
can pick high capacity and fast peers more appropriately
2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
console. Selected stats can be harvested automatically and fed into
in-memory RRD databases, and those databases can be served up either as
PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
details). A base set of stats are harvested by default, but an
alternate list can be specified by setting the 'stat.summaries' list on
the advanced config. For instance:
stat.summaries=bw.recvRate.60000,bw.sendRate.60000
* HTML tweaking for the general config page (thanks void!)
* Odd NPE fix (thanks Complication!)
2006-03-15 Complication
* Trim out an old, inactive IP second-guessing method
(thanks for spotting, Anonymous!)
2006-03-15 jrandom
* Further stat cleanup
* Keep track of how many peers we are actively trying to communicate with,
beyond those who are just trying to communicate with us.
* Further router tunnel participation throttle revisions to avoid spurious
rejections
* Rate stat display cleanup (thanks ripple!)
* Don't even try to send messages that have been queued too long
2006-03-05 zzz
* Remove the +++--- from the logs on i2psnark startup
2006-03-05 jrandom
* HTML fixes in Syndie to work better with opera (thanks shaklen!)
* Give netDb lookups to floodfill peers more time, as they are much more
likely to succeed (thereby cutting down on the unnecessary netDb
searches outside the floodfill set)
* Fix to the SSU IP detection code so we won't use introducers when we
don't need them (thanks Complication!)
* Add a brief shitlist to i2psnark so it doesn't keep on trying to reach
peers given to it
* Don't let netDb searches wander across too many peers
* Don't use the 1s bandwidth usage in the tunnel participation throttle,
as its too volatile to have much meaning.
* Don't bork if a Syndie post is missing an entry.sml
2006-03-05 Complication
* Reduce exposed statistical information,
to make build and uptime tracking more expensive
2006-03-04 Complication
* Fix the announce URL of orion's tracker in Snark sources
2006-03-03 Complication
* Explicit check for an index out of bounds exception while parsing
an inbound IRC command (implicit check was there already)
2006-03-01 jrandom
* More aggressive tunnel throttling as we approach our bandwidth limit,
and throttle based off periods wider than 1 second.
* Included Doubtful Salmon's syndie stylings (thanks!)
2006-02-27 zzz
* Update error page templates to add \r, Connection: close, and
Proxy-connection: close to headers.
* 2006-02-27 0.6.1.12 released
2006-02-27 jrandom
* Adjust the jbigi.jar to use the athlon-optimized jbigi on windows/amd64
machines, rather than the generic jbigi (until we have an athlon64
optimized version)
2006-02-26 jrandom
* Switch from the bouncycastle to the gnu-crypto implementation for
SHA256, as benchmarks show a 10-30% speedup.
* Removed some unnecessary object caches
* Don't close i2psnark streams prematurely
2006-02-25 jrandom
* Made the Syndie permalinks in the thread view point to the blog view
* Disabled TCP again (since the live net seems to be doing well w/out it)
* Fix the message time on inbound SSU establishment (thanks zzz!)
* Don't be so aggressive with parallel tunnel creation when a tunnel pool
just starts up
2006-02-24 jrandom
* Rounding calculation cleanup in the stats, and avoid an uncontested
mutex (thanks ripple!)
* SSU handshake cleanup to help force incompatible peers to stop nagging
us by both not giving them an updated reference to us and by dropping
future handshake packets from them.
2006-02-23 jrandom
* Increase the SSU retransmit ceiling (for slow links)
* Estimate the sender's SSU MTU (to help see if we agree)
2006-02-22 jrandom
* Fix to properly profile tunnel joins (thanks Ragnarok, frosk, et al!)
* More aggressive poor-man's PMTU, allowing larger MTUs on less reliable
links
* Further class validator refactorings
2006-02-22 jrandom
* Handle a rare race under high bandwidth situations in the SSU transport
* Minor refactoring so we don't confuse sun's 1.6.0-b2 validator
2006-02-21 Complication
* Reactivate TCP transport by default, in addition to re-allowing
* 2006-02-21 0.6.1.11 released
2006-02-21 jrandom
* Throttle the outbound SSU establishment queue, so it doesn't fill up the
heap when backlogged (and so that the messages queued up on it don't sit
there forever)
* Further SSU memory cleanup
* Clean up the address regeneration code so it knows when to rebuild the
local info more precisely.
2006-02-20 jrandom
* Properly enable TCP this time (oops)
* Deal with multiple form handlers on the same page in the console without
being too annoying (thanks blubb and bd_!)
2006-02-20 jrandom
* Reenable the TCP transport as a fallback (we'll continue to muck with
debugging SSU-only elsewhere)
2006-02-20 jrandom
* Major SSU and router tuning to reduce contention, memory usage, and GC
churn. There are still issues to be worked out, but this should be a
substantial improvement.
* Modified the optional netDb harvester task to support choosing whether
to use (non-anonymous) direct connections or (anonymous) exploratory
tunnels to do the harvesting. Harvesting itself is enabled via the
advanced config "netDb.shouldHarvest=true" (default is false) and the
connection type can be chosen via "netDb.harvestDirectly=false" (default
is false).
2006-02-19 dust
* Added pruning of suckers history (it used to grow indefinitely).
2006-02-19 jrandom
* Moved the current net's reseed URL to a different location than where
the old net looks (dev.i2p.net/i2pdb2/ vs .../i2pdb/)
* More aggressively expire inbound messages (on receive, not just on send)
* Add in a hook for breaking backwards compatibility in the SSU wire
protocol directly by including a version as part of the handshake. The
version is currently set to 0, however, so the wire protocol from this
build is compatible with all earlier SSU implementations.
* Increased the number of complete message readers, cutting down
substantially on the delay processing inbound messages.
* Delete the message history file on startup
* Reworked the restart/shutdown display on the console (thanks bd_!)
2006-02-18 jrandom
* Migrate the outbound packets from a central component to the individual
per-peer components, substantially cutting down on lock contention when
dealing with higher degrees.
* Load balance the outbound SSU transfers evenly across peers, rather than
across messages (so peers with few messages won't be starved by peers
with many).
* Reduce the frequency of router info rebuilds (thanks bar!)
2006-02-18 jrandom
* Add a new AIMD throttle in SSU to control the number of concurrent
messages being sent to a given peer, in addition to the throttle on the
number of concurrent bytes to that peer.
* Adjust the existing SSU outbound queue to throttle based on the queue's
lag, not an arbitrary number of packets.
2006-02-17 jrandom
* Properly fix the build request queue throttling, using queue age to
detect congestion, rather than queue size.
2006-02-17 jrandom
* Disable the message history log file by default (duh - feel free to
delete messageHistory.txt after upgrading. thanks deathfatty!)
* Limit the size of the inbound tunnel build request queue so we don't
get an insane backlog of requests that we're bound to reject, and adjust
the queue processing so we keep on churning through them when we've got
a backlog.
* Small fixes for the multiuser syndie operation (thanks Complication!)
* Renamed modified PRNG classes that were imported from gnu-crypto so we
don't conflict with JVMs using that as a JCE provider (thanks blx!)
* 2006-02-16 0.6.1.10 released

View File

@@ -1,5 +1,5 @@
<i2p.news date="$Date: 2006/01/12 20:13:05 $">
<i2p.release version="0.6.1.10" date="2006/02/16" minVersion="0.6"
<i2p.news date="$Date: 2006-07-18 15:08:00 $">
<i2p.release version="0.6.1.24" date="2006/07/29" minVersion="0.6"
anonurl="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/i2p/i2pupdate.sud"
publicurl="http://dev.i2p.net/i2p/i2pupdate.sud"
anonannouncement="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/pipermail/i2p/2005-September/000878.html"

View File

@@ -4,7 +4,7 @@
<info>
<appname>i2p</appname>
<appversion>0.6.1.10</appversion>
<appversion>0.6.1.24</appversion>
<authors>
<author name="I2P" email="support@i2p.net"/>
</authors>

View File

@@ -19,3 +19,7 @@ the libg++.so.5 dependency that has been a problem for a few linux distros.
On Feb 8, 2006, the libjbigi-linux-viac3.so was added to jbigi.jar after
being compiled by jrandom on linux/p4 (cross compiled to --host=viac3)
On Feb 27, 2006, jbigi-win-athlon.dll was copied to jbigi-win-athlon64.dll,
as it should offer amd64 users better performance than jbigi-win-none.dll
until we get a full amd64 build.

Binary file not shown.

Binary file not shown.

View File

@@ -1,7 +1,9 @@
HTTP/1.1 409 Conflict
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
HTTP/1.1 409 Conflict
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
Connection: close
Proxy-Connection: close
<html><head>
<title>Destination key conflict</title>
<style type='text/css'>

View File

@@ -1,7 +1,9 @@
HTTP/1.1 504 Gateway Timeout
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
HTTP/1.1 504 Gateway Timeout
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
Connection: close
Proxy-Connection: close
<html><head>
<title>Eepsite not reachable</title>
<style type='text/css'>

View File

@@ -1,7 +1,9 @@
HTTP/1.1 400 Destination Not Found
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
HTTP/1.1 400 Destination Not Found
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
Connection: close
Proxy-Connection: close
<html><head>
<title>Invalid eepsite destination</title>
<style type='text/css'>

View File

@@ -1,7 +1,9 @@
HTTP/1.1 404 Domain Not Found
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
HTTP/1.1 404 Domain Not Found
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
Connection: close
Proxy-Connection: close
<html><head>
<title>Eepsite unknown</title>
<style type='text/css'>

View File

@@ -1,7 +1,9 @@
HTTP/1.1 504 Gateway Timeout
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
HTTP/1.1 504 Gateway Timeout
Content-Type: text/html; charset=iso-8859-1
Cache-control: no-cache
Connection: close
Proxy-Connection: close
<html><head>
<title>Outproxy Not Found</title>
<style type='text/css'>

View File

@@ -138,7 +138,7 @@
#tunnelListPage .footer label {
text-align : right;
height : 24px;
width : 160px;
width : 360px;
float : left;
}

View File

@@ -122,7 +122,7 @@
#tunnelListPage .footer label {
text-align : right;
height : 24px;
width : 160px;
width : 360px;
float : left;
}

View File

@@ -1,5 +1,5 @@
<i2p.news date="$Date: 2006/02/15 12:56:59 $">
<i2p.release version="0.6.1.10" date="2006/02/16" minVersion="0.6"
<i2p.news date="$Date: 2006-07-27 22:34:59 $">
<i2p.release version="0.6.1.24" date="2006/07/29" minVersion="0.6"
anonurl="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/i2p/i2pupdate.sud"
publicurl="http://dev.i2p.net/i2p/i2pupdate.sud"
anonannouncement="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/pipermail/i2p/2005-September/000878.html"
@@ -10,13 +10,12 @@
anonlogs="http://i2p/Nf3ab-ZFkmI-LyMt7GjgT-jfvZ3zKDl0L96pmGQXF1B82W2Bfjf0n7~288vafocjFLnQnVcmZd~-p0-Oolfo9aW2Rm-AhyqxnxyLlPBqGxsJBXjPhm1JBT4Ia8FB-VXt0BuY0fMKdAfWwN61-tj4zIcQWRxv3DFquwEf035K~Ra4SWOqiuJgTRJu7~o~DzHVljVgWIzwf8Z84cz0X33pv-mdG~~y0Bsc2qJVnYwjjR178YMcRSmNE0FVMcs6f17c6zqhMw-11qjKpY~EJfHYCx4lBWF37CD0obbWqTNUIbL~78vxqZRT3dgAgnLixog9nqTO-0Rh~NpVUZnoUi7fNR~awW5U3Cf7rU7nNEKKobLue78hjvRcWn7upHUF45QqTDuaM3yZa7OsjbcH-I909DOub2Q0Dno6vIwuA7yrysccN1sbnkwZbKlf4T6~iDdhaSLJd97QCyPOlbyUfYy9QLNExlRqKgNVJcMJRrIual~Lb1CLbnzt0uvobM57UpqSAAAA/meeting141"
publiclogs="http://www.i2p.net/meeting141" />
&#149;
2006-02-16:
0.6.1.10 released with some major updates - it is <b>not</b> backwards compatible, so upgrading is essential.
<br>
2006-07-18: 0.6.1.22 <a href="http://dev.i2p/pipermail/i2p/2006-July/001300.html">released</a>
<br />
&#149;
2006-02-14:
<a href="http://dev.i2p/pipermail/i2p/2006-February/001260.html">status notes</a>
2006-06-13:
<a href="http://dev.i2p/pipermail/i2p/2006-June/001293.html">status notes</a>
and
<a href="http://www.i2p/meeting168">meeting log</a>
<br>
<a href="http://www.i2p/meeting182">meeting log</a>
<br />
</i2p.news>

Some files were not shown because too many files have changed in this diff.