commit 9f7ed247eaf720c10e9fa76bde61b1ce19e44cb6
Author: jrandom

+Syndie's code is entirely open source - unless otherwise specified, all
+code and content contained in the Syndie releases are put out into the
+public domain. Within the
+
+The index files all use the following format: Individual posts are found under
+ The
+
+Given the simple file-based archive hierarchy and index, running a public
+Syndie archive is trivial - simply publish your
+
+To enable people to upload posts to an HTTP archive, Syndie bundles an
+ The CGI accepts posts (uploaded through schedule and put),
+writing them to
+
+To fully enable users to upload posts, you will need to set up a
+recurring task on your OS running the following Syndie script on occasion
+(once every hour is perhaps reasonable):
+
+This tells Syndie to pull in all of the
+
+An example upload form:
+
+The database schema itself is kept as part of the Syndie source code as
+Also of interest are the database schema updates -
+
+Development discussions for Syndie go on within the
+I2P development team, on their
+development list,
+forum,
+weekly meetings, and
+IRC channel.
+
+Installation packages for download are
+built with various ant targets:
+
+All of the packages can be built into
+
+With a modern GCC/GCJ (releases prior to 4.0 will fail when they try to
+write to the database), you can build native Syndie executables. On *nix/osx,
+"
+
+Even small contributions of $10 or $20 help offset these costs,
+and if you can help, please do so
+(note that the contribution is for Syndie in the memo field).
+
+The GPG public key used to sign the distributed files is
+
+Older releases are archived
+
+To uninstall, if you used the
+
+To upgrade or reinstall, simply install Syndie again on top of itself.
+Upgrading or reinstalling does not affect your content or keys, just the
+software. To completely wipe any old data, identities, or keys, delete the
+
+To run multiple separate Syndie instances, you can specify an alternate
+data root directory on the Syndie command line
+(
+
+These packages include the java 1.4 compiled HSQLDB 1.8.0.5 - you can
+replace the included lib/hsqldb.jar with newer versions, or remove it and
+adjust the
+
+What do forum/blogging tools have to do with providing strong anonymity?
+The answer: *everything*. To briefly summarize:
+
+Alternately, you can review some of Syndie's
+use cases. (irc log edited for clarity)
+
+Short answer: it's probably simplest to consider Syndie to be released under a
+BSD-like license.
+
+Medium answer: nearly all of the code is released into the public domain, with
+some files under MIT or BSD licenses. Syndie is also linked against a library
+released under the GPL with the linking exception (which means Syndie does not
+have to be GPLed).
+
+Long answer: read the LICENSE file in the package.
+
+The relationship between Syndie and other
+efforts has been moved to its own page.
+
+While its structure leads to a large number of
+different configurations, most needs will be met by selecting one of
+the options from each of the following three criteria:
+
+ * reading is authorized by giving people the symmetric key or passphrase
+   to decrypt the post. Alternately, the post may include a publicly
+   visible prompt, where the correct answer serves to generate the
+   correct decryption key.
+
+ ** posting, updating, and/or commenting is authorized by providing those
+    users with asymmetric private keys to sign the posts with, where the
+    corresponding public key is included in the forum's metadata as
+    authorized to post, manage, or comment on the forum. Alternately, the
+    signing public keys of individual authorized users may be listed in
+    the metadata.
+
+Individual posts may contain many different elements:
+
+On the whole, Syndie works at the *content layer* - individual posts are
+contained in encrypted zip files, and participating in the forum means
+simply sharing these files. There are no dependencies upon how the files
+are transferred (over I2P,
+Tor,
+Freenet,
+gnutella,
+bittorrent,
+RSS,
+usenet,
+email),
+but simple aggregation and distribution tools will be
+bundled with the standard Syndie release.
+
+Interaction with the Syndie content will occur in several ways. First,
+there is a scriptable text based interface, allowing basic command line
+and interactive reading from, writing to, managing, and synchronizing
+the forums. For instance, the following is a simple script to generate
+a new "message of the day" post -
+
+Simply pipe that through the syndie executable and the deed is done:
+
+Additionally, there is work going on for a graphical Syndie interface,
+which includes the safe rendering of plain text and HTML pages (of
+course, with support for transparent integration with Syndie's
+features).
+
+Applications based on the old Syndie's "sucker" code will enable the
+scraping and rewriting of normal web pages and web sites so that they
+can be used as single or multipage Syndie posts, including images and
+other resources as attachments.
+
+Down the line, firefox/mozilla plugins are planned to both detect and
+import Syndie formatted files and Syndie references, as well as notify
+the local Syndie GUI that a particular forum, topic, tag, author, or
+search result should be brought into focus.
+
+Of course, since Syndie is, at its core, a content layer with a defined
+file format and cryptographic algorithms, other applications or
+alternate implementations will probably be brought forward over time.
+
+The Syndie text interface is a context-sensitive
+menu driven application, and is fed commands from the standard input,
+allowing scriptable operation. The application itself can be launched
+with zero, one, or two parameters: The optional
+
+The menus are outlined below, with unimplemented commands prefixed by
+
+The $numBytes body is an encrypted zip archive, though the encryption method
+depends upon the type line. For posts and metadata messages, the data is
+AES/256/CBC encrypted (with a 16 byte IV at the beginning). For private
+messages, the first 512 bytes are ElGamal/2048 encrypted to the channel's
+encryption key, which has the AES/256 session key and IV within it, and the
+remaining bytes are AES/256/CBC encrypted.
+
+The AES/256 encrypted area begins with a random number of nonzero padding
+bytes, followed by 0x0, then the internal payload size (as a 4 byte unsigned
+integer), followed by the total payload size (the same as the Size header),
+followed by the actual Zip encoded data, a random number of pad bytes, up to
+a 16 byte boundary, aka:
+
+After the AES/256 encrypted area there is an HMAC-SHA256 of the body section,
+using the SHA256 of the body decryption key concatenated with the IV as the
+HMAC key.
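As an editor's sketch (not code from this commit) of the layout and MAC construction just described - bodyKey, iv, encryptedBody, and decrypted are placeholder names, and the interpretation of the two size fields is an assumption:

import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class BodyLayoutSketch {
    /** HMAC key = SHA256(body decryption key || IV), MAC computed over the encrypted body section. */
    static byte[] computeBodyMac(byte[] bodyKey, byte[] iv, byte[] encryptedBody)
            throws GeneralSecurityException {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(bodyKey);
        sha.update(iv);
        byte[] macKey = sha.digest();
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        return mac.doFinal(encryptedBody);   // compare against the HMAC stored after the body
    }

    /** Walk the decrypted AES area: nonzero pad, 0x0, two 4 byte sizes, then the zip data. */
    static byte[] extractZip(byte[] decrypted) {
        int off = 0;
        while (decrypted[off] != 0x0) off++;          // skip the random nonzero padding
        off++;                                        // skip the 0x0 separator
        long internalSize = readUInt(decrypted, off); // internal payload size (taken here as the zip length)
        off += 4;
        long totalSize = readUInt(decrypted, off);    // total payload size (matches the Size header)
        off += 4;
        byte[] zip = new byte[(int) internalSize];
        System.arraycopy(decrypted, off, zip, 0, zip.length);
        return zip;                                   // trailing pad bytes up to the 16 byte boundary are ignored
    }

    static long readUInt(byte[] d, int off) {
        return ((d[off] & 0xFFL) << 24) | ((d[off + 1] & 0xFFL) << 16)
             | ((d[off + 2] & 0xFFL) << 8) | (d[off + 3] & 0xFFL);
    }
}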
+associated with the channel. Not all messages must have valid authorization
+signatures, but unauthorized messages may not be passed along. The authentication signature may be verified against the Author header (either
+in the public or encrypted header sets), but not all messages are authenticated.
+
+The unencrypted zip archive may contain the following entries:
+ * Optionally contains headers that are not visible to those who cannot decrypt
+   the message
+ * Page $n's contents
+ * Headers for page $n: Content-type, title, references, etc
+ * Attachment $n's contents
+ * Headers for attachment $n: Content-type, language, etc
+ * Contains a 32x32 pixel avatar for the message or channel
+ * Contains a tree of syndie references, formatted as
+   "[\t]*$name\t$uri\t$refType\t$description\n", where the tab indentation
+   at the beginning of the line determines the tree structure. The refType
+   field can, for instance, be used to differentiate mentions of a positive
+   reference and those recommending blacklisting, etc.
+
+When passing around keys for Syndie channels, they can either be transferred
+in Syndie URIs or in key files. The key files themselves
+are UTF encoded as follows:
+
+This defines the URIs safely passable within syndie, capable of referencing
+specific resources. They contain one of four reference types, plus a bencoded
+set of attributes:
+
+Syndie messages have a defined set of headers, and unknown headers are
+uninterpreted.
+ Author
+ AuthenticationMask
+ TargetChannel
+ PostURI
+ References
+ Tags
+ OverwriteURI
+ ForceNewThread
+ RefuseReplies
+ Cancel
+ Subject
+ BodyKey
+ BodyKeyPromptSalt
+ BodyKeyPrompt
+ Identity
+ EncryptKey
+ Name
+ Description
+ Edition
+ PublicPosting
+ PublicReplies
+ AuthorizedKeys
+ ManagerKeys
+ Archives
+ ChannelReadKeys
+ Expiration
+
+In the following list, Required means the header must be included
+for messages of the allowed types. Allow as hidden means the header
+may be included in the encrypted
+
+When referring to
+
+While many different groups often want to organize discussions into an
+online forum, the centralized nature of traditional forums (websites, BBSes,
+etc) can be a problem. For instance, the site hosting the forum can be
+taken offline through denial of service attacks or administrative action.
+In addition, the single host offers a simple point to monitor the group's
+activity, so that even if a forum is pseudonymous, those pseudonyms can be
+tied to the IP that posted or read individual messages.
+
+In addition, not only are the forums decentralized, they are organized
+in an ad-hoc manner yet fully compatible with other organization techniques.
+This means that some small group of people can run their forum using one
+technique (distributing the messages by pasting them on a wiki site),
+another can run their forum using another technique (posting their messages
+in a distributed hashtable like OpenDHT),
+yet if one person is aware of both techniques, they can synchronize the two
+forums together. This lets the people who were only aware of the wiki site
+talk to people who were only aware of the OpenDHT service without knowing
+anything about each other. Extended further, Syndie allows individual
+cells to control their own exposure while communicating across the whole
+organization.
+
+Forums can be configured so that only authorized people can read the
+content, or even know what pseudonyms are posting the messages, even if an
+adversary confiscates the servers distributing the posts. In addition,
+authorized users can prevent any unauthorized posts from being made entirely,
+or only allow them under limited circumstances.
+
+Unlike traditional forums, with Syndie you can participate even when you
+are not online, "syncing up" any accumulated changes with the forum later
+on when it is convenient, perhaps days, weeks, or even months later.
+
+Syndie is not limited to simple text messages - individual web pages or
+full web sites can be packaged up into a single Syndie post, and using the
+offline forum functionality, you can browse that
+web site through Syndie without an active internet connection.
+
+All applications strive for security, but most do not consider
+identity or traffic pattern related information sensitive, so they do not
+bother trying to control their exposure. Syndie, however, is designed with
+the needs of people demanding strong anonymity and security in mind.
+
+Do whatever we can to load up the native library.
+ * If it can find a custom built jcpuid.dll / libjcpuid.so, it'll use that. Otherwise
+ * it'll try to look in the classpath for the correct library (see loadFromResource).
+ * If the user specifies -Djcpuid.enable=false it'll skip all of this. Try loading it from an explicitly built jcpuid.dll / libjcpuid.so Check all of the jars in the classpath for the jcpuid dll/so.
+ * This file should be stored in the resource in the same package as this class.
+ *
+ * This is a pretty ugly hack, using the general technique illustrated by the
+ * onion FEC libraries. It works by pulling the resource, writing out the
+ * byte stream to a temporary file, loading the native library from that file,
+ * then deleting the file. A base abstract class to facilitate hash implementations. Trivial constructor for use by concrete subclasses. Returns the byte array to use as padding before completing a hash
+ * operation. Constructs the result from the contents of the current context. The block digest transformation per se. The basic visible methods of any hash algorithm. A hash (or message digest) algorithm produces its output by iterating a
+ * basic compression function on blocks of data. Returns the canonical name of this algorithm. Returns the output length in bytes of this message digest algorithm. Returns the algorithm's (inner) block size in bytes. Continues a message digest operation using the input byte. Continues a message digest operation, by filling the buffer, processing
+ * data in the algorithm's HASH_SIZE-bit block(s), updating the context and
+ * count, and buffering the remaining bytes in buffer for the next
+ * operation. Continues a message digest operation, by filling the buffer, processing
+ * data in the algorithm's HASH_SIZE-bit block(s), updating the context and
+ * count, and buffering the remaining bytes in buffer for the next
+ * operation. Completes the message digest by performing final operations such as
+ * padding and resetting the instance. Resets the current context of this instance clearing any eventually cached
+ * intermediary values. A basic test. Ensures that the digest of a pre-determined message is equal
+ * to a known pre-computed value. Returns a clone copy of this instance. Implementation of SHA2-1 [SHA-256] per the IETF Draft Specification. References: Private constructor for cloning purposes. An abstract class to facilitate implementing PRNG algorithms. Trivial constructor for use by concrete subclasses. There are some things users of this class must be aware of:
+ *
+ * References: The basic visible methods of any pseudo-random number generator. The [HAC] defines a PRNG (as implemented in this library) as follows: IMPLEMENTATION NOTE: Although all the concrete classes in this
+ * package implement the {@link Cloneable} interface, it is important to note
+ * here that such an operation, for those algorithms that use an underlying
+ * symmetric key block cipher, DOES NOT clone any session key material
+ * that may have been used in initialising the source PRNG (the instance to be
+ * cloned). Instead a clone of an already initialised PRNG, that uses an
+ * underlying symmetric key block cipher, is another instance with a clone of
+ * the same cipher that operates with the same block size but without any
+ * knowledge of either key material or key size. References: Returns the canonical name of this instance. Initialises the pseudo-random number generator scheme with the
+ * appropriate attributes. Returns the next 8 bits of random data generated from this instance. Fills the designated byte array, starting from byte at index
+ * Supplement, or possibly replace, the random state of this PRNG with
+ * a random byte. Implementations are not required to implement this method in any
+ * meaningful way; this may be a no-operation, and implementations may
+ * throw an {@link UnsupportedOperationException}. Supplement, or possibly replace, the random state of this PRNG with
+ * a sequence of new random bytes. Implementations are not required to implement this method in any
+ * meaningful way; this may be a no-operation, and implementations may
+ * throw an {@link UnsupportedOperationException}. Supplement, or possibly replace, the random state of this PRNG with
+ * a sequence of new random bytes. Implementations are not required to implement this method in any
+ * meaningful way; this may be a no-operation, and implementations may
+ * throw an {@link UnsupportedOperationException}. Returns a clone copy of this instance. Provide a base scope for accessing singletons that I2P exposes. Rather than
+ * using the traditional singleton, where any component can access the component
+ * in question directly, all of those I2P related singletons are exposed through
+ * a particular I2PAppContext. This helps not only with understanding their use
+ * and the components I2P exposes, but it also allows multiple isolated
+ * environments to operate concurrently within the same JVM - particularly useful
+ * for stubbing out implementations of the rooted components and simulating the
+ * software's interaction between multiple instances.
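A rough illustration of that scoped-singleton pattern (an editor's sketch, not part of the commit), using only calls that appear later in this diff, in DHSessionKeyBuilder's static initializer:

import net.i2p.I2PAppContext;

class ContextSketch {
    static void register() {
        // Components reach their singletons through a context instead of static globals,
        // so multiple isolated environments can coexist in one JVM.
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        String minProp = ctx.getProperty("crypto.dh.precalc.min", "5");   // scoped configuration lookup
        ctx.statManager().createRateStat("crypto.dhGeneratePublicTime",
                "How long it takes to create x and X", "Encryption",
                new long[] { 60*1000, 5*60*1000, 60*60*1000 });           // scoped stat registration
    }
}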
+ *
+ * Rijndael was written by Vincent
+ * Rijmen and Joan Daemen.
+ *
+ * Portions of this code are Copyright © 1997, 1998
+ * Systemics Ltd on behalf of the
+ * Cryptix Development Team.
+ *
+ *
+ * @author Raif S. Naffah
+ * @author Paulo S. L. M. Barreto
+ *
+ * License is apparently available from http://www.cryptix.org/docs/license.html
+ */
+public final class CryptixRijndael_Algorithm // implicit no-argument constructor
+{
+ // Debugging methods and variables
+ //...........................................................................
+
+ static final String _NAME = "Rijndael_Algorithm";
+ static final boolean _IN = true, _OUT = false;
+
+ static final boolean _RDEBUG = false;
+ static final int _debuglevel = 0; // RDEBUG ? Rijndael_Properties.getLevel(NAME): 0;
+ // static final PrintWriter err = RDEBUG ? Rijndael_Properties.getOutput() : null;
+ static final PrintWriter _err = new PrintWriter(new java.io.OutputStreamWriter(System.err));
+
+ static final boolean _TRACE = false; // Rijndael_Properties.isTraceable(NAME);
+
+ static void debug(String s) {
+ _err.println(">>> " + _NAME + ": " + s);
+ }
+
+ static void trace(boolean in, String s) {
+ if (_TRACE) _err.println((in ? "==> " : "<== ") + _NAME + "." + s);
+ }
+
+ static void trace(String s) {
+ if (_TRACE) _err.println("<=> " + _NAME + "." + s);
+ }
+
+ // Constants and variables
+ //...........................................................................
+
+ static final int _BLOCK_SIZE = 16; // default block size in bytes
+
+ static final int[] _alog = new int[256];
+ static final int[] _log = new int[256];
+
+ static final byte[] _S = new byte[256];
+ static final byte[] _Si = new byte[256];
+ static final int[] _T1 = new int[256];
+ static final int[] _T2 = new int[256];
+ static final int[] _T3 = new int[256];
+ static final int[] _T4 = new int[256];
+ static final int[] _T5 = new int[256];
+ static final int[] _T6 = new int[256];
+ static final int[] _T7 = new int[256];
+ static final int[] _T8 = new int[256];
+ static final int[] _U1 = new int[256];
+ static final int[] _U2 = new int[256];
+ static final int[] _U3 = new int[256];
+ static final int[] _U4 = new int[256];
+ static final byte[] _rcon = new byte[30];
+
+ static final int[][][] _shifts = new int[][][] { { { 0, 0}, { 1, 3}, { 2, 2}, { 3, 1}},
+ { { 0, 0}, { 1, 5}, { 2, 4}, { 3, 3}},
+ { { 0, 0}, { 1, 7}, { 3, 5}, { 4, 4}}};
+
+ private static final char[] _HEX_DIGITS = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D',
+ 'E', 'F'};
+
+ // Static code - to intialise S-boxes and T-boxes
+ //...........................................................................
+
+ static {
+ long time = Clock.getInstance().now();
+
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("Algorithm Name: Rijndael ver 0.1");
+ System.out.println("Electronic Codebook (ECB) Mode");
+ System.out.println();
+ }
+ int ROOT = 0x11B;
+ int i, j = 0;
+
+ //
+ // produce log and alog tables, needed for multiplying in the
+ // field GF(2^m) (generator = 3)
+ //
+ _alog[0] = 1;
+ for (i = 1; i < 256; i++) {
+ j = (_alog[i - 1] << 1) ^ _alog[i - 1];
+ if ((j & 0x100) != 0) j ^= ROOT;
+ _alog[i] = j;
+ }
+ for (i = 1; i < 255; i++)
+ _log[_alog[i]] = i;
+ byte[][] A = new byte[][] { { 1, 1, 1, 1, 1, 0, 0, 0}, { 0, 1, 1, 1, 1, 1, 0, 0}, { 0, 0, 1, 1, 1, 1, 1, 0},
+ { 0, 0, 0, 1, 1, 1, 1, 1}, { 1, 0, 0, 0, 1, 1, 1, 1}, { 1, 1, 0, 0, 0, 1, 1, 1},
+ { 1, 1, 1, 0, 0, 0, 1, 1}, { 1, 1, 1, 1, 0, 0, 0, 1}};
+ byte[] B = new byte[] { 0, 1, 1, 0, 0, 0, 1, 1};
+
+ //
+ // substitution box based on F^{-1}(x)
+ //
+ int t;
+ byte[][] box = new byte[256][8];
+ box[1][7] = 1;
+ for (i = 2; i < 256; i++) {
+ j = _alog[255 - _log[i]];
+ for (t = 0; t < 8; t++)
+ box[i][t] = (byte) ((j >>> (7 - t)) & 0x01);
+ }
+ //
+ // affine transform: box[i] <- B + A*box[i]
+ //
+ byte[][] cox = new byte[256][8];
+ for (i = 0; i < 256; i++)
+ for (t = 0; t < 8; t++) {
+ cox[i][t] = B[t];
+ for (j = 0; j < 8; j++)
+ cox[i][t] ^= A[t][j] * box[i][j];
+ }
+ //
+ // S-boxes and inverse S-boxes
+ //
+ for (i = 0; i < 256; i++) {
+ _S[i] = (byte) (cox[i][0] << 7);
+ for (t = 1; t < 8; t++)
+ _S[i] ^= cox[i][t] << (7 - t);
+ _Si[_S[i] & 0xFF] = (byte) i;
+ }
+ //
+ // T-boxes
+ //
+ byte[][] G = new byte[][] { { 2, 1, 1, 3}, { 3, 2, 1, 1}, { 1, 3, 2, 1}, { 1, 1, 3, 2}};
+ byte[][] AA = new byte[4][8];
+ for (i = 0; i < 4; i++) {
+ for (j = 0; j < 4; j++)
+ AA[i][j] = G[i][j];
+ AA[i][i + 4] = 1;
+ }
+ byte pivot, tmp;
+ byte[][] iG = new byte[4][4];
+ for (i = 0; i < 4; i++) {
+ pivot = AA[i][i];
+ if (pivot == 0) {
+ t = i + 1;
+ while ((AA[t][i] == 0) && (t < 4))
+ t++;
+ if (t == 4)
+ throw new RuntimeException("G matrix is not invertible");
+
+ for (j = 0; j < 8; j++) {
+ tmp = AA[i][j];
+ AA[i][j] = AA[t][j];
+ AA[t][j] = tmp;
+ }
+ pivot = AA[i][i];
+ }
+ for (j = 0; j < 8; j++)
+ if (AA[i][j] != 0) AA[i][j] = (byte) _alog[(255 + _log[AA[i][j] & 0xFF] - _log[pivot & 0xFF]) % 255];
+ for (t = 0; t < 4; t++)
+ if (i != t) {
+ for (j = i + 1; j < 8; j++)
+ AA[t][j] ^= mul(AA[i][j], AA[t][i]);
+ AA[t][i] = 0;
+ }
+ }
+ for (i = 0; i < 4; i++)
+ for (j = 0; j < 4; j++)
+ iG[i][j] = AA[i][j + 4];
+
+ int s;
+ for (t = 0; t < 256; t++) {
+ s = _S[t];
+ _T1[t] = mul4(s, G[0]);
+ _T2[t] = mul4(s, G[1]);
+ _T3[t] = mul4(s, G[2]);
+ _T4[t] = mul4(s, G[3]);
+
+ s = _Si[t];
+ _T5[t] = mul4(s, iG[0]);
+ _T6[t] = mul4(s, iG[1]);
+ _T7[t] = mul4(s, iG[2]);
+ _T8[t] = mul4(s, iG[3]);
+
+ _U1[t] = mul4(t, iG[0]);
+ _U2[t] = mul4(t, iG[1]);
+ _U3[t] = mul4(t, iG[2]);
+ _U4[t] = mul4(t, iG[3]);
+ }
+ //
+ // round constants
+ //
+ _rcon[0] = 1;
+ int r = 1;
+ for (t = 1; t < 30;)
+ _rcon[t++] = (byte) (r = mul(2, r));
+
+ time = Clock.getInstance().now() - time;
+
+ if (_RDEBUG && _debuglevel > 8) {
+ System.out.println("==========");
+ System.out.println();
+ System.out.println("Static Data");
+ System.out.println();
+ System.out.println("S[]:");
+ for (i = 0; i < 16; i++) {
+ for (j = 0; j < 16; j++)
+ System.out.print("0x" + byteToString(_S[i * 16 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("Si[]:");
+ for (i = 0; i < 16; i++) {
+ for (j = 0; j < 16; j++)
+ System.out.print("0x" + byteToString(_Si[i * 16 + j]) + ", ");
+ System.out.println();
+ }
+
+ System.out.println();
+ System.out.println("iG[]:");
+ for (i = 0; i < 4; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + byteToString(iG[i][j]) + ", ");
+ System.out.println();
+ }
+
+ System.out.println();
+ System.out.println("T1[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T1[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T2[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T2[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T3[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T3[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T4[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T4[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T5[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T5[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T6[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T6[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T7[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T7[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("T8[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_T8[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+
+ System.out.println();
+ System.out.println("U1[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_U1[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("U2[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_U2[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("U3[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_U3[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+ System.out.println();
+ System.out.println("U4[]:");
+ for (i = 0; i < 64; i++) {
+ for (j = 0; j < 4; j++)
+ System.out.print("0x" + intToString(_U4[i * 4 + j]) + ", ");
+ System.out.println();
+ }
+
+ System.out.println();
+ System.out.println("rcon[]:");
+ for (i = 0; i < 5; i++) {
+ for (j = 0; j < 6; j++)
+ System.out.print("0x" + byteToString(_rcon[i * 6 + j]) + ", ");
+ System.out.println();
+ }
+
+ System.out.println();
+ System.out.println("Total initialization time: " + time + " ms.");
+ System.out.println();
+ }
+ }
+
+ // multiply two elements of GF(2^m)
+ static final int mul(int a, int b) {
+ return (a != 0 && b != 0) ? _alog[(_log[a & 0xFF] + _log[b & 0xFF]) % 255] : 0;
+ }
+
+ // convenience method used in generating Transposition boxes
+ static final int mul4(int a, byte[] b) {
+ if (a == 0) return 0;
+ a = _log[a & 0xFF];
+ int a0 = (b[0] != 0) ? _alog[(a + _log[b[0] & 0xFF]) % 255] & 0xFF : 0;
+ int a1 = (b[1] != 0) ? _alog[(a + _log[b[1] & 0xFF]) % 255] & 0xFF : 0;
+ int a2 = (b[2] != 0) ? _alog[(a + _log[b[2] & 0xFF]) % 255] & 0xFF : 0;
+ int a3 = (b[3] != 0) ? _alog[(a + _log[b[3] & 0xFF]) % 255] & 0xFF : 0;
+ return a0 << 24 | a1 << 16 | a2 << 8 | a3;
+ }
+
+ // Basic API methods
+ //...........................................................................
+
+ /**
+ * Convenience method to expand a user-supplied key material into a
+ * session key, assuming Rijndael's default block size (128-bit).
+ *
+ * @param k The 128/192/256-bit user-key to use.
+ * @exception InvalidKeyException If the key is invalid.
+ */
+ public static final Object makeKey(byte[] k) throws InvalidKeyException {
+ return makeKey(k, _BLOCK_SIZE);
+ }
+
+ /**
+ * Convenience method to encrypt exactly one block of plaintext, assuming
+ * Rijndael's default block size (128-bit).
+ *
+ * @param in The plaintext.
+ * @param result The resulting ciphertext.
+ * @param inOffset Index of in from which to start considering data.
+ * @param sessionKey The session key to use for encryption.
+ */
+ public static final void blockEncrypt(byte[] in, byte[] result, int inOffset, int outOffset, Object sessionKey) {
+ if (_RDEBUG) trace(_IN, "blockEncrypt(" + in + ", " + inOffset + ", " + sessionKey + ")");
+ int[][] Ke = (int[][]) ((Object[]) sessionKey)[0]; // extract encryption round keys
+ int ROUNDS = Ke.length - 1;
+ int[] Ker = Ke[0];
+
+ // plaintext to ints + key
+ int t0 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Ker[0];
+ int t1 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Ker[1];
+ int t2 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Ker[2];
+ int t3 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Ker[3];
+
+ int a0, a1, a2, a3;
+ for (int r = 1; r < ROUNDS; r++) { // apply round transforms
+ Ker = Ke[r];
+ a0 = (_T1[(t0 >>> 24) & 0xFF] ^ _T2[(t1 >>> 16) & 0xFF] ^ _T3[(t2 >>> 8) & 0xFF] ^ _T4[t3 & 0xFF]) ^ Ker[0];
+ a1 = (_T1[(t1 >>> 24) & 0xFF] ^ _T2[(t2 >>> 16) & 0xFF] ^ _T3[(t3 >>> 8) & 0xFF] ^ _T4[t0 & 0xFF]) ^ Ker[1];
+ a2 = (_T1[(t2 >>> 24) & 0xFF] ^ _T2[(t3 >>> 16) & 0xFF] ^ _T3[(t0 >>> 8) & 0xFF] ^ _T4[t1 & 0xFF]) ^ Ker[2];
+ a3 = (_T1[(t3 >>> 24) & 0xFF] ^ _T2[(t0 >>> 16) & 0xFF] ^ _T3[(t1 >>> 8) & 0xFF] ^ _T4[t2 & 0xFF]) ^ Ker[3];
+ t0 = a0;
+ t1 = a1;
+ t2 = a2;
+ t3 = a3;
+ if (_RDEBUG && _debuglevel > 6)
+ System.out.println("CT" + r + "=" + intToString(t0) + intToString(t1) + intToString(t2)
+ + intToString(t3));
+ }
+
+ // last round is special
+ Ker = Ke[ROUNDS];
+ int tt = Ker[0];
+ result[outOffset++] = (byte) (_S[(t0 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_S[(t1 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_S[(t2 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_S[t3 & 0xFF] ^ tt);
+ tt = Ker[1];
+ result[outOffset++] = (byte) (_S[(t1 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_S[(t2 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_S[(t3 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_S[t0 & 0xFF] ^ tt);
+ tt = Ker[2];
+ result[outOffset++] = (byte) (_S[(t2 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_S[(t3 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_S[(t0 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_S[t1 & 0xFF] ^ tt);
+ tt = Ker[3];
+ result[outOffset++] = (byte) (_S[(t3 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_S[(t0 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_S[(t1 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_S[t2 & 0xFF] ^ tt);
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("CT=" + toString(result));
+ System.out.println();
+ }
+ if (_RDEBUG) trace(_OUT, "blockEncrypt()");
+ }
+
+ /**
+ * Convenience method to decrypt exactly one block of ciphertext, assuming
+ * Rijndael's default block size (128-bit).
+ *
+ * @param in The ciphertext.
+ * @param result The resulting plaintext.
+ * @param inOffset Index of in from which to start considering data.
+ * @param sessionKey The session key to use for decryption.
+ */
+ public static final void blockDecrypt(byte[] in, byte[] result, int inOffset, int outOffset, Object sessionKey) {
+ if (in.length - inOffset > result.length - outOffset)
+ throw new IllegalArgumentException("result too small: in.len=" + in.length + " in.offset=" + inOffset
+ + " result.len=" + result.length + " result.offset=" + outOffset);
+ if (in.length - inOffset <= 15)
+ throw new IllegalArgumentException("data too small: " + in.length + " inOffset: " + inOffset);
+ if (_RDEBUG) trace(_IN, "blockDecrypt(" + in + ", " + inOffset + ", " + sessionKey + ")");
+ int[][] Kd = (int[][]) ((Object[]) sessionKey)[1]; // extract decryption round keys
+ int ROUNDS = Kd.length - 1;
+ int[] Kdr = Kd[0];
+
+ // ciphertext to ints + key
+ int t0 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Kdr[0];
+ int t1 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Kdr[1];
+ int t2 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Kdr[2];
+ int t3 = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Kdr[3];
+
+ int a0, a1, a2, a3;
+ for (int r = 1; r < ROUNDS; r++) { // apply round transforms
+ Kdr = Kd[r];
+ a0 = (_T5[(t0 >>> 24) & 0xFF] ^ _T6[(t3 >>> 16) & 0xFF] ^ _T7[(t2 >>> 8) & 0xFF] ^ _T8[t1 & 0xFF]) ^ Kdr[0];
+ a1 = (_T5[(t1 >>> 24) & 0xFF] ^ _T6[(t0 >>> 16) & 0xFF] ^ _T7[(t3 >>> 8) & 0xFF] ^ _T8[t2 & 0xFF]) ^ Kdr[1];
+ a2 = (_T5[(t2 >>> 24) & 0xFF] ^ _T6[(t1 >>> 16) & 0xFF] ^ _T7[(t0 >>> 8) & 0xFF] ^ _T8[t3 & 0xFF]) ^ Kdr[2];
+ a3 = (_T5[(t3 >>> 24) & 0xFF] ^ _T6[(t2 >>> 16) & 0xFF] ^ _T7[(t1 >>> 8) & 0xFF] ^ _T8[t0 & 0xFF]) ^ Kdr[3];
+ t0 = a0;
+ t1 = a1;
+ t2 = a2;
+ t3 = a3;
+ if (_RDEBUG && _debuglevel > 6)
+ System.out.println("PT" + r + "=" + intToString(t0) + intToString(t1) + intToString(t2)
+ + intToString(t3));
+ }
+
+ // last round is special
+ Kdr = Kd[ROUNDS];
+ int tt = Kdr[0];
+ result[outOffset++] = (byte) (_Si[(t0 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_Si[(t3 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_Si[(t2 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_Si[t1 & 0xFF] ^ tt);
+ tt = Kdr[1];
+ result[outOffset++] = (byte) (_Si[(t1 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_Si[(t0 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_Si[(t3 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_Si[t2 & 0xFF] ^ tt);
+ tt = Kdr[2];
+ result[outOffset++] = (byte) (_Si[(t2 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_Si[(t1 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_Si[(t0 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_Si[t3 & 0xFF] ^ tt);
+ tt = Kdr[3];
+ result[outOffset++] = (byte) (_Si[(t3 >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[outOffset++] = (byte) (_Si[(t2 >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[outOffset++] = (byte) (_Si[(t1 >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[outOffset++] = (byte) (_Si[t0 & 0xFF] ^ tt);
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("PT=" + toString(result));
+ System.out.println();
+ }
+ if (_RDEBUG) trace(_OUT, "blockDecrypt()");
+ }
+
+ /** A basic symmetric encryption/decryption test. */
+ public static boolean self_test() {
+ return self_test(_BLOCK_SIZE);
+ }
+
+ // Rijndael own methods
+ //...........................................................................
+
+ /** @return The default length in bytes of the Algorithm input block. */
+ public static final int blockSize() {
+ return _BLOCK_SIZE;
+ }
+
+ /**
+ * Expand a user-supplied key material into a session key.
+ *
+ * @param k The 128/192/256-bit user-key to use.
+ * @param blockSize The block size in bytes of this Rijndael.
+ * @exception InvalidKeyException If the key is invalid.
+ */
+ public static final/* synchronized */Object makeKey(byte[] k, int blockSize) throws InvalidKeyException {
+ return makeKey(k, blockSize, null);
+ }
+ public static final/* synchronized */Object makeKey(byte[] k, int blockSize, CryptixAESKeyCache.KeyCacheEntry keyData) throws InvalidKeyException {
+ if (_RDEBUG) trace(_IN, "makeKey(" + k + ", " + blockSize + ")");
+ if (k == null) throw new InvalidKeyException("Empty key");
+ if (!(k.length == 16 || k.length == 24 || k.length == 32))
+ throw new InvalidKeyException("Incorrect key length");
+ int ROUNDS = getRounds(k.length, blockSize);
+ int BC = blockSize / 4;
+ int[][] Ke = null; // new int[ROUNDS + 1][BC]; // encryption round keys
+ int[][] Kd = null; // new int[ROUNDS + 1][BC]; // decryption round keys
+ int ROUND_KEY_COUNT = (ROUNDS + 1) * BC;
+ int KC = k.length / 4;
+ int[] tk = null; // new int[KC];
+ int i, j;
+
+ if (keyData == null) {
+ Ke = new int[ROUNDS + 1][BC];
+ Kd = new int[ROUNDS + 1][BC];
+ tk = new int[KC];
+ } else {
+ Ke = keyData.Ke;
+ Kd = keyData.Kd;
+ tk = keyData.tk;
+ }
+
+ // copy user material bytes into temporary ints
+ for (i = 0, j = 0; i < KC;)
+ tk[i++] = (k[j++] & 0xFF) << 24 | (k[j++] & 0xFF) << 16 | (k[j++] & 0xFF) << 8 | (k[j++] & 0xFF);
+ // copy values into round key arrays
+ int t = 0;
+ for (j = 0; (j < KC) && (t < ROUND_KEY_COUNT); j++, t++) {
+ Ke[t / BC][t % BC] = tk[j];
+ Kd[ROUNDS - (t / BC)][t % BC] = tk[j];
+ }
+ int tt, rconpointer = 0;
+ while (t < ROUND_KEY_COUNT) {
+ // extrapolate using phi (the round key evolution function)
+ tt = tk[KC - 1];
+ tk[0] ^= (_S[(tt >>> 16) & 0xFF] & 0xFF) << 24 ^ (_S[(tt >>> 8) & 0xFF] & 0xFF) << 16
+ ^ (_S[tt & 0xFF] & 0xFF) << 8 ^ (_S[(tt >>> 24) & 0xFF] & 0xFF)
+ ^ (_rcon[rconpointer++] & 0xFF) << 24;
+ if (KC != 8)
+ for (i = 1, j = 0; i < KC;) {
+ //tk[i++] ^= tk[j++];
+ // The above line replaced with the code below in order to work around
+ // a bug in the kjc-1.4F java compiler (which has been reported).
+ tk[i] ^= tk[j++];
+ i++;
+ }
+ else {
+ for (i = 1, j = 0; i < KC / 2;) {
+ //tk[i++] ^= tk[j++];
+ // The above line replaced with the code below in order to work around
+ // a bug in the kjc-1.4F java compiler (which has been reported).
+ tk[i] ^= tk[j++];
+ i++;
+ }
+ tt = tk[KC / 2 - 1];
+ tk[KC / 2] ^= (_S[tt & 0xFF] & 0xFF) ^ (_S[(tt >>> 8) & 0xFF] & 0xFF) << 8
+ ^ (_S[(tt >>> 16) & 0xFF] & 0xFF) << 16 ^ (_S[(tt >>> 24) & 0xFF] & 0xFF) << 24;
+ for (j = KC / 2, i = j + 1; i < KC;) {
+ //tk[i++] ^= tk[j++];
+ // The above line replaced with the code below in order to work around
+ // a bug in the kjc-1.4F java compiler (which has been reported).
+ tk[i] ^= tk[j++];
+ i++;
+ }
+ }
+ // copy values into round key arrays
+ for (j = 0; (j < KC) && (t < ROUND_KEY_COUNT); j++, t++) {
+ Ke[t / BC][t % BC] = tk[j];
+ Kd[ROUNDS - (t / BC)][t % BC] = tk[j];
+ }
+ }
+ for (int r = 1; r < ROUNDS; r++)
+ // inverse MixColumn where needed
+ for (j = 0; j < BC; j++) {
+ tt = Kd[r][j];
+ Kd[r][j] = _U1[(tt >>> 24) & 0xFF] ^ _U2[(tt >>> 16) & 0xFF] ^ _U3[(tt >>> 8) & 0xFF] ^ _U4[tt & 0xFF];
+ }
+ // assemble the encryption (Ke) and decryption (Kd) round keys into
+ // one sessionKey object
+ Object[] sessionKey = null;
+ if (keyData == null)
+ sessionKey = new Object[] { Ke, Kd};
+ else
+ sessionKey = keyData.key;
+ if (_RDEBUG) trace(_OUT, "makeKey()");
+ return sessionKey;
+ }
+
+ /**
+ * Encrypt exactly one block of plaintext.
+ *
+ * @param in The plaintext.
+ * @param result The resulting ciphertext.
+ * @param inOffset Index of in from which to start considering data.
+ * @param sessionKey The session key to use for encryption.
+ * @param blockSize The block size in bytes of this Rijndael.
+ */
+ public static final void blockEncrypt(byte[] in, byte[] result, int inOffset, int outOffset, Object sessionKey, int blockSize) {
+ if (blockSize == _BLOCK_SIZE) {
+ blockEncrypt(in, result, inOffset, outOffset, sessionKey);
+ return;
+ }
+ if (_RDEBUG) trace(_IN, "blockEncrypt(" + in + ", " + inOffset + ", " + sessionKey + ", " + blockSize + ")");
+ Object[] sKey = (Object[]) sessionKey; // extract encryption round keys
+ int[][] Ke = (int[][]) sKey[0];
+
+ int BC = blockSize / 4;
+ int ROUNDS = Ke.length - 1;
+ int SC = BC == 4 ? 0 : (BC == 6 ? 1 : 2);
+ int s1 = _shifts[SC][1][0];
+ int s2 = _shifts[SC][2][0];
+ int s3 = _shifts[SC][3][0];
+ int[] a = new int[BC];
+ int[] t = new int[BC]; // temporary work array
+ int i;
+ int j = outOffset;
+ int tt;
+
+ for (i = 0; i < BC; i++)
+ // plaintext to ints + key
+ t[i] = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Ke[0][i];
+ for (int r = 1; r < ROUNDS; r++) { // apply round transforms
+ for (i = 0; i < BC; i++)
+ a[i] = (_T1[(t[i] >>> 24) & 0xFF] ^ _T2[(t[(i + s1) % BC] >>> 16) & 0xFF]
+ ^ _T3[(t[(i + s2) % BC] >>> 8) & 0xFF] ^ _T4[t[(i + s3) % BC] & 0xFF])
+ ^ Ke[r][i];
+ System.arraycopy(a, 0, t, 0, BC);
+ if (_RDEBUG && _debuglevel > 6) System.out.println("CT" + r + "=" + toString(t));
+ }
+ for (i = 0; i < BC; i++) { // last round is special
+ tt = Ke[ROUNDS][i];
+ result[j++] = (byte) (_S[(t[i] >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[j++] = (byte) (_S[(t[(i + s1) % BC] >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[j++] = (byte) (_S[(t[(i + s2) % BC] >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[j++] = (byte) (_S[t[(i + s3) % BC] & 0xFF] ^ tt);
+ }
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("CT=" + toString(result));
+ System.out.println();
+ }
+ if (_RDEBUG) trace(_OUT, "blockEncrypt()");
+ }
+
+ /**
+ * Decrypt exactly one block of ciphertext.
+ *
+ * @param in The ciphertext.
+ * @param result The resulting plaintext.
+ * @param inOffset Index of in from which to start considering data.
+ * @param sessionKey The session key to use for decryption.
+ * @param blockSize The block size in bytes of this Rijndael.
+ */
+ public static final void blockDecrypt(byte[] in, byte[] result, int inOffset, int outOffset, Object sessionKey, int blockSize) {
+ if (blockSize == _BLOCK_SIZE) {
+ blockDecrypt(in, result, inOffset, outOffset, sessionKey);
+ return;
+ }
+
+ if (_RDEBUG) trace(_IN, "blockDecrypt(" + in + ", " + inOffset + ", " + sessionKey + ", " + blockSize + ")");
+ Object[] sKey = (Object[]) sessionKey; // extract decryption round keys
+ int[][] Kd = (int[][]) sKey[1];
+
+ int BC = blockSize / 4;
+ int ROUNDS = Kd.length - 1;
+ int SC = BC == 4 ? 0 : (BC == 6 ? 1 : 2);
+ int s1 = _shifts[SC][1][1];
+ int s2 = _shifts[SC][2][1];
+ int s3 = _shifts[SC][3][1];
+ int[] a = new int[BC];
+ int[] t = new int[BC]; // temporary work array
+ int i;
+ int j = outOffset;
+ int tt;
+
+ for (i = 0; i < BC; i++)
+ // ciphertext to ints + key
+ t[i] = ((in[inOffset++] & 0xFF) << 24 | (in[inOffset++] & 0xFF) << 16 | (in[inOffset++] & 0xFF) << 8 | (in[inOffset++] & 0xFF))
+ ^ Kd[0][i];
+ for (int r = 1; r < ROUNDS; r++) { // apply round transforms
+ for (i = 0; i < BC; i++)
+ a[i] = (_T5[(t[i] >>> 24) & 0xFF] ^ _T6[(t[(i + s1) % BC] >>> 16) & 0xFF]
+ ^ _T7[(t[(i + s2) % BC] >>> 8) & 0xFF] ^ _T8[t[(i + s3) % BC] & 0xFF])
+ ^ Kd[r][i];
+ System.arraycopy(a, 0, t, 0, BC);
+ if (_RDEBUG && _debuglevel > 6) System.out.println("PT" + r + "=" + toString(t));
+ }
+ for (i = 0; i < BC; i++) { // last round is special
+ tt = Kd[ROUNDS][i];
+ result[j++] = (byte) (_Si[(t[i] >>> 24) & 0xFF] ^ (tt >>> 24));
+ result[j++] = (byte) (_Si[(t[(i + s1) % BC] >>> 16) & 0xFF] ^ (tt >>> 16));
+ result[j++] = (byte) (_Si[(t[(i + s2) % BC] >>> 8) & 0xFF] ^ (tt >>> 8));
+ result[j++] = (byte) (_Si[t[(i + s3) % BC] & 0xFF] ^ tt);
+ }
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("PT=" + toString(result));
+ System.out.println();
+ }
+ if (_RDEBUG) trace(_OUT, "blockDecrypt()");
+ }
+
+ /** A basic symmetric encryption/decryption test for a given key size. */
+ private static boolean self_test(int keysize) {
+ if (_RDEBUG) trace(_IN, "self_test(" + keysize + ")");
+ boolean ok = false;
+ try {
+ byte[] kb = new byte[keysize];
+ byte[] pt = new byte[_BLOCK_SIZE];
+ int i;
+
+ for (i = 0; i < keysize; i++)
+ kb[i] = (byte) i;
+ for (i = 0; i < _BLOCK_SIZE; i++)
+ pt[i] = (byte) i;
+
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("==========");
+ System.out.println();
+ System.out.println("KEYSIZE=" + (8 * keysize));
+ System.out.println("KEY=" + toString(kb));
+ System.out.println();
+ }
+ Object key = makeKey(kb, _BLOCK_SIZE);
+
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("Intermediate Ciphertext Values (Encryption)");
+ System.out.println();
+ System.out.println("PT=" + toString(pt));
+ }
+ byte[] ct = new byte[_BLOCK_SIZE];
+ blockEncrypt(pt, ct, 0, 0, key, _BLOCK_SIZE);
+
+ if (_RDEBUG && _debuglevel > 6) {
+ System.out.println("Intermediate Plaintext Values (Decryption)");
+ System.out.println();
+ System.out.println("CT=" + toString(ct));
+ }
+ byte[] cpt = new byte[_BLOCK_SIZE];
+ blockDecrypt(ct, cpt, 0, 0, key, _BLOCK_SIZE);
+
+ ok = areEqual(pt, cpt);
+ if (!ok) throw new RuntimeException("Symmetric operation failed");
+ } catch (Exception x) {
+ if (_RDEBUG && _debuglevel > 0) {
+ debug("Exception encountered during self-test: " + x.getMessage());
+ x.printStackTrace();
+ }
+ }
+ if (_RDEBUG && _debuglevel > 0) debug("Self-test OK? " + ok);
+ if (_RDEBUG) trace(_OUT, "self_test()");
+ return ok;
+ }
+
+ /**
+ * Return the number of rounds for a given Rijndael's key and block sizes.
+ *
+ * @param keySize The size of the user key material in bytes.
+ * @param blockSize The desired block size in bytes.
+ * @return The number of rounds for a given Rijndael's key and
+ * block sizes.
+ */
+ public static final int getRounds(int keySize, int blockSize) {
+ switch (keySize) {
+ case 16:
+ return blockSize == 16 ? 10 : (blockSize == 24 ? 12 : 14);
+ case 24:
+ return blockSize != 32 ? 12 : 14;
+ default:
+ // 32 bytes = 256 bits
+ return 14;
+ }
+ }
+
+ // utility static methods (from cryptix.util.core ArrayUtil and Hex classes)
+ //...........................................................................
+
+ /**
+ * Compares two byte arrays for equality.
+ *
+ * @return true if the arrays have identical contents
+ */
+ private static final boolean areEqual(byte[] a, byte[] b) {
+ int aLength = a.length;
+ if (aLength != b.length) return false;
+ for (int i = 0; i < aLength; i++)
+ if (a[i] != b[i]) return false;
+ return true;
+ }
+
+ /**
+ * Returns a string of 2 hexadecimal digits (most significant
+ * digit first) corresponding to the lowest 8 bits of n.
+ */
+ private static final String byteToString(int n) {
+ char[] buf = { _HEX_DIGITS[(n >>> 4) & 0x0F], _HEX_DIGITS[n & 0x0F]};
+ return new String(buf);
+ }
+
+ /**
+ * Returns a string of 8 hexadecimal digits (most significant
+ * digit first) corresponding to the integer n, which is
+ * treated as unsigned.
+ */
+ private static final String intToString(int n) {
+ char[] buf = new char[8];
+ for (int i = 7; i >= 0; i--) {
+ buf[i] = _HEX_DIGITS[n & 0x0F];
+ n >>>= 4;
+ }
+ return new String(buf);
+ }
+
+ /**
+ * Returns a string of hexadecimal digits from a byte array. Each
+ * byte is converted to 2 hex symbols.
+ */
+ private static final String toString(byte[] ba) {
+ int length = ba.length;
+ char[] buf = new char[length * 2];
+ for (int i = 0, j = 0, k; i < length;) {
+ k = ba[i++];
+ buf[j++] = _HEX_DIGITS[(k >>> 4) & 0x0F];
+ buf[j++] = _HEX_DIGITS[k & 0x0F];
+ }
+ return new String(buf);
+ }
+
+ /**
+ * Returns a string of hexadecimal digits from an integer array. Each
+ * int is converted to 4 hex symbols.
+ */
+ private static final String toString(int[] ia) {
+ int length = ia.length;
+ char[] buf = new char[length * 8];
+ for (int i = 0, j = 0, k; i < length; i++) {
+ k = ia[i];
+ buf[j++] = _HEX_DIGITS[(k >>> 28) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 24) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 20) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 16) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 12) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 8) & 0x0F];
+ buf[j++] = _HEX_DIGITS[(k >>> 4) & 0x0F];
+ buf[j++] = _HEX_DIGITS[k & 0x0F];
+ }
+ return new String(buf);
+ }
+
+ // main(): use to generate the Intermediate Values KAT
+ //...........................................................................
+
+ public static void main(String[] args) {
+ self_test(16);
+ self_test(24);
+ self_test(32);
+ }
+}
\ No newline at end of file
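As a usage sketch of the static API defined above (an editor's illustration, not part of the commit; it assumes the class lives in the same net.i2p.crypto package as the other files in this diff, and that the checked exception is java.security.InvalidKeyException as suggested by the @exception tags):

import java.security.InvalidKeyException;
import java.util.Arrays;

class RijndaelRoundTripSketch {
    /** Encrypt and decrypt a single 16 byte block with 256 bit key material. */
    static boolean roundTrip() throws InvalidKeyException {
        byte[] key  = new byte[32];   // 128/192/256-bit user key material
        byte[] pt   = new byte[16];   // exactly one block (default block size)
        byte[] ct   = new byte[16];
        byte[] back = new byte[16];
        Object sessionKey = CryptixRijndael_Algorithm.makeKey(key);
        CryptixRijndael_Algorithm.blockEncrypt(pt, ct, 0, 0, sessionKey);
        CryptixRijndael_Algorithm.blockDecrypt(ct, back, 0, 0, sessionKey);
        return Arrays.equals(pt, back);   // mirrors the round trip performed by self_test()
    }
}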
diff --git a/src/net/i2p/crypto/CryptoConstants.java b/src/net/i2p/crypto/CryptoConstants.java
new file mode 100644
index 0000000..9650390
--- /dev/null
+++ b/src/net/i2p/crypto/CryptoConstants.java
@@ -0,0 +1,66 @@
+package net.i2p.crypto;
+
+/*
+ * Copyright (c) 2003, TheCrypto
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * - Neither the name of the TheCrypto may be used to endorse or promote
+ * products derived from this software without specific prior written
+ * permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+import java.math.BigInteger;
+
+import net.i2p.util.NativeBigInteger;
+
+/**
+ * Primes for ElGamal and DSA from
+ * http://www.ietf.org/proceedings/03mar/I-D/draft-ietf-ipsec-ike-modp-groups-05.txt
+ */
+public class CryptoConstants {
+ public static final BigInteger dsap = new NativeBigInteger(
+ "9c05b2aa960d9b97b8931963c9cc9e8c3026e9b8ed92fad0a69cc886d5bf8015fcadae31"
+ + "a0ad18fab3f01b00a358de237655c4964afaa2b337e96ad316b9fb1cc564b5aec5b69a9f"
+ + "f6c3e4548707fef8503d91dd8602e867e6d35d2235c1869ce2479c3b9d5401de04e0727f"
+ + "b33d6511285d4cf29538d9e3b6051f5b22cc1c93",
+ 16);
+ public static final BigInteger dsaq = new NativeBigInteger("a5dfc28fef4ca1e286744cd8eed9d29d684046b7", 16);
+ public static final BigInteger dsag = new NativeBigInteger(
+ "c1f4d27d40093b429e962d7223824e0bbc47e7c832a39236fc683af84889581075ff9082"
+ + "ed32353d4374d7301cda1d23c431f4698599dda02451824ff369752593647cc3ddc197de"
+ + "985e43d136cdcfc6bd5409cd2f450821142a5e6f8eb1c3ab5d0484b8129fcf17bce4f7f3"
+ + "3321c3cb3dbb14a905e7b2b3e93be4708cbcc82",
+ 16);
+ public static final BigInteger elgp = new NativeBigInteger("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
+ + "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
+ + "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
+ + "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
+ + "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
+ + "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
+ + "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
+ + "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
+ + "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
+ + "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
+ + "15728E5A8AACAA68FFFFFFFFFFFFFFFF", 16);
+ public static final BigInteger elgg = new NativeBigInteger("2");
+}
\ No newline at end of file
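A brief editor's sketch (not part of the commit) of how these constants drive the Diffie-Hellman exchange implemented by DHSessionKeyBuilder in the next file; x and y stand in for the random private exponents each side generates (the real code uses a shorter exponent size):

import java.math.BigInteger;
import java.security.SecureRandom;

import net.i2p.crypto.CryptoConstants;

class DhMathSketch {
    /** Both sides derive the same shared value; the session key is taken from its bytes. */
    static boolean demo() {
        SecureRandom rnd = new SecureRandom();
        BigInteger x = new BigInteger(2048, rnd).mod(CryptoConstants.elgp); // our private exponent
        BigInteger y = new BigInteger(2048, rnd).mod(CryptoConstants.elgp); // peer's private exponent
        BigInteger X = CryptoConstants.elgg.modPow(x, CryptoConstants.elgp); // our public value
        BigInteger Y = CryptoConstants.elgg.modPow(y, CryptoConstants.elgp); // peer's public value
        BigInteger ours   = Y.modPow(x, CryptoConstants.elgp);
        BigInteger theirs = X.modPow(y, CryptoConstants.elgp);
        return ours.equals(theirs);
    }
}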
diff --git a/src/net/i2p/crypto/DHSessionKeyBuilder.java b/src/net/i2p/crypto/DHSessionKeyBuilder.java
new file mode 100644
index 0000000..949d3a7
--- /dev/null
+++ b/src/net/i2p/crypto/DHSessionKeyBuilder.java
@@ -0,0 +1,539 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.List;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.I2PAppContext;
+import net.i2p.I2PException;
+import net.i2p.data.ByteArray;
+import net.i2p.data.DataHelper;
+import net.i2p.data.SessionKey;
+import net.i2p.util.Clock;
+import net.i2p.util.I2PThread;
+import net.i2p.util.Log;
+import net.i2p.util.NativeBigInteger;
+import net.i2p.util.RandomSource;
+
+/**
+ * Generate a new session key through a diffie hellman exchange. This uses the
+ * constants defined in CryptoConstants, which causes the exchange to create a
+ * 256 bit session key.
+ *
+ * This class precalcs a set of values on its own thread, using those transparently
+ * when a new instance is created. By default, the minimum threshold for creating
+ * new values for the pool is 5, and the max pool size is 10. Whenever the pool has
+ * less than the minimum, it fills it up again to the max. There is a delay after
+ * each precalculation so that the CPU isn't hosed during startup (defaulting to 1 second).
+ * These three parameters are controlled by java environmental variables and
+ * can be adjusted via:
+ * -Dcrypto.dh.precalc.min=40 -Dcrypto.dh.precalc.max=100 -Dcrypto.dh.precalc.delay=60000
+ *
+ * (delay is milliseconds)
+ *
+ * To disable precalculation, set min to 0
+ *
+ * @author jrandom
+ */
+public class DHSessionKeyBuilder {
+ private static I2PAppContext _context = I2PAppContext.getGlobalContext();
+ private final static Log _log = new Log(DHSessionKeyBuilder.class);
+ private static int MIN_NUM_BUILDERS = -1;
+ private static int MAX_NUM_BUILDERS = -1;
+ private static int CALC_DELAY = -1;
+ private static volatile List _builders = new ArrayList(50);
+ private static Thread _precalcThread = null;
+ private BigInteger _myPrivateValue;
+ private BigInteger _myPublicValue;
+ private BigInteger _peerValue;
+ private SessionKey _sessionKey;
+ private ByteArray _extraExchangedBytes; // bytes after the session key from the DH exchange
+
+ public final static String PROP_DH_PRECALC_MIN = "crypto.dh.precalc.min";
+ public final static String PROP_DH_PRECALC_MAX = "crypto.dh.precalc.max";
+ public final static String PROP_DH_PRECALC_DELAY = "crypto.dh.precalc.delay";
+ public final static String DEFAULT_DH_PRECALC_MIN = "5";
+ public final static String DEFAULT_DH_PRECALC_MAX = "50";
+ public final static String DEFAULT_DH_PRECALC_DELAY = "10000";
+
+ static {
+ I2PAppContext ctx = _context;
+ ctx.statManager().createRateStat("crypto.dhGeneratePublicTime", "How long it takes to create x and X", "Encryption", new long[] { 60*1000, 5*60*1000, 60*60*1000 });
+ ctx.statManager().createRateStat("crypto.dhCalculateSessionTime", "How long it takes to create the session key", "Encryption", new long[] { 60*1000, 5*60*1000, 60*60*1000 });
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_DH_PRECALC_MIN, DEFAULT_DH_PRECALC_MIN));
+ MIN_NUM_BUILDERS = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_DH_PRECALC_MIN);
+ MIN_NUM_BUILDERS = val;
+ }
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_DH_PRECALC_MAX, DEFAULT_DH_PRECALC_MAX));
+ MAX_NUM_BUILDERS = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_DH_PRECALC_MAX);
+ MAX_NUM_BUILDERS = val;
+ }
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_DH_PRECALC_DELAY, DEFAULT_DH_PRECALC_DELAY));
+ CALC_DELAY = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_DH_PRECALC_DELAY);
+ CALC_DELAY = val;
+ }
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("DH Precalc (minimum: " + MIN_NUM_BUILDERS + " max: " + MAX_NUM_BUILDERS + ", delay: "
+ + CALC_DELAY + ")");
+
+ _precalcThread = new I2PThread(new DHSessionKeyBuilderPrecalcRunner(MIN_NUM_BUILDERS, MAX_NUM_BUILDERS));
+ _precalcThread.setName("DH Precalc");
+ _precalcThread.setDaemon(true);
+ _precalcThread.setPriority(Thread.MIN_PRIORITY);
+ _precalcThread.start();
+ }
+
+ /**
+ * Construct a new DH key builder
+ *
+ */
+ public DHSessionKeyBuilder() {
+ this(false);
+ DHSessionKeyBuilder builder = null;
+ synchronized (_builders) {
+ if (_builders.size() > 0) {
+ builder = (DHSessionKeyBuilder) _builders.remove(0);
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Removing a builder. # left = " + _builders.size());
+ } else {
+ if (_log.shouldLog(Log.WARN)) _log.warn("NO MORE BUILDERS! creating one now");
+ }
+ }
+ if (builder != null) {
+ _myPrivateValue = builder._myPrivateValue;
+ _myPublicValue = builder._myPublicValue;
+ _peerValue = builder._peerValue;
+ _sessionKey = builder._sessionKey;
+ _extraExchangedBytes = builder._extraExchangedBytes;
+ } else {
+ _myPrivateValue = null;
+ _myPublicValue = null;
+ _peerValue = null;
+ _sessionKey = null;
+ _myPublicValue = generateMyValue();
+ _extraExchangedBytes = new ByteArray();
+ }
+ }
+
+ public DHSessionKeyBuilder(boolean usePool) {
+ _myPrivateValue = null;
+ _myPublicValue = null;
+ _peerValue = null;
+ _sessionKey = null;
+ _extraExchangedBytes = new ByteArray();
+ }
+
+ /**
+ * Conduct a DH exchange over the streams, returning the resulting data.
+ *
+ * @return exchanged data
+ * @throws IOException if there is an error (but does not close the streams)
+ */
+ public static DHSessionKeyBuilder exchangeKeys(InputStream in, OutputStream out) throws IOException {
+ DHSessionKeyBuilder builder = new DHSessionKeyBuilder();
+
+ // send: X
+ writeBigI(out, builder.getMyPublicValue());
+
+ // read: Y
+ BigInteger Y = readBigI(in);
+ if (Y == null) return null;
+ try {
+ builder.setPeerPublicValue(Y);
+ return builder;
+ } catch (InvalidPublicParameterException ippe) {
+ if (_log.shouldLog(Log.ERROR))
+ _log.error("Key exchange failed (hostile peer?)", ippe);
+ return null;
+ }
+ }
+
+ static BigInteger readBigI(InputStream in) throws IOException {
+ byte Y[] = new byte[256];
+ int read = DataHelper.read(in, Y);
+ if (read != 256) {
+ return null;
+ }
+ if ((Y[0] & 0x80) != 0) {
+ // high bit set, need to inject an additional byte to keep 2s complement
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("High bit set");
+ byte Y2[] = new byte[257];
+ System.arraycopy(Y, 0, Y2, 1, 256);
+ Y = Y2;
+ }
+ return new NativeBigInteger(1, Y);
+ }
+
+ /**
+ * Write out the integer as a 256 byte value. This left pads with 0s so as
+ * to keep it in 2s complement, and if it is already 257 bytes (due to
+ * the sign bit) that leading byte is dropped.
+ */
+ static void writeBigI(OutputStream out, BigInteger val) throws IOException {
+ byte x[] = val.toByteArray();
+ for (int i = x.length; i < 256; i++)
+ out.write(0);
+ if (x.length == 257)
+ out.write(x, 1, 256);
+ else if (x.length <= 256)
+ out.write(x);
+ else
+ throw new IllegalArgumentException("Value is too large! length=" + x.length);
+
+ out.flush();
+ }
+
+ private static final int getSize() {
+ synchronized (_builders) {
+ return _builders.size();
+ }
+ }
+
+ private static final int addBuilder(DHSessionKeyBuilder builder) {
+ int sz = 0;
+ synchronized (_builders) {
+ _builders.add(builder);
+ sz = _builders.size();
+ }
+ return sz;
+ }
+
+ /**
+ * Create a new private value for the DH exchange, and return the number to
+ * be exchanged, leaving the actual private value accessible through getMyPrivateValue()
+ *
+ */
+ public BigInteger generateMyValue() {
+ long start = System.currentTimeMillis();
+ _myPrivateValue = new NativeBigInteger(KeyGenerator.PUBKEY_EXPONENT_SIZE, RandomSource.getInstance());
+ BigInteger myValue = CryptoConstants.elgg.modPow(_myPrivateValue, CryptoConstants.elgp);
+ long end = System.currentTimeMillis();
+ long diff = end - start;
+ _context.statManager().addRateData("crypto.dhGeneratePublicTime", diff, diff);
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Took more than a second (" + diff + "ms) to generate local DH value");
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Took " + diff + "ms to generate local DH value");
+ }
+ return myValue;
+ }
+
+ /**
+ * Retrieve the private value used by the local participant in the DH exchange
+ */
+ public BigInteger getMyPrivateValue() {
+ return _myPrivateValue;
+ }
+
+ /**
+ * Retrieve the public value used by the local participant in the DH exchange,
+ * generating it if necessary
+ */
+ public BigInteger getMyPublicValue() {
+ if (_myPublicValue == null) _myPublicValue = generateMyValue();
+ return _myPublicValue;
+ }
+ /**
+ * Return a 256 byte representation of our public key, with leading 0s
+ * if necessary.
+ *
+ */
+ public byte[] getMyPublicValueBytes() {
+ return toByteArray(getMyPublicValue());
+ }
+
+ private static final byte[] toByteArray(BigInteger bi) {
+ byte data[] = bi.toByteArray();
+ byte rv[] = new byte[256];
+ if (data.length == 257) // high byte has the sign bit
+ System.arraycopy(data, 1, rv, 0, rv.length);
+ else if (data.length == 256)
+ System.arraycopy(data, 0, rv, 0, rv.length);
+ else
+ System.arraycopy(data, 0, rv, rv.length-data.length, data.length);
+ return rv;
+ }
+
+ /**
+ * Specify the value given by the peer for use in the session key negotiation
+ *
+ */
+ public void setPeerPublicValue(BigInteger peerVal) throws InvalidPublicParameterException {
+ validatePublic(peerVal);
+ _peerValue = peerVal;
+ }
+ public void setPeerPublicValue(byte val[]) throws InvalidPublicParameterException {
+ if (val.length != 256)
+ throw new IllegalArgumentException("Peer public value must be exactly 256 bytes");
+
+ if ((val[0] & 0x80) != 0) {
+ // high bit set, need to inject an additional byte to keep 2s complement
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("High bit set");
+ byte val2[] = new byte[257];
+ System.arraycopy(val, 0, val2, 1, 256);
+ val = val2;
+ }
+ setPeerPublicValue(new NativeBigInteger(1, val));
+ //_peerValue = new NativeBigInteger(val);
+ }
+
+ public BigInteger getPeerPublicValue() {
+ return _peerValue;
+ }
+ public byte[] getPeerPublicValueBytes() {
+ return toByteArray(getPeerPublicValue());
+ }
+
+ /**
+ * Retrieve the session key, calculating it if necessary (and if possible).
+ *
+ * @return session key exchanged, or null if the exchange is not complete
+ */
+ public SessionKey getSessionKey() {
+ if (_sessionKey != null) return _sessionKey;
+ if (_peerValue != null) {
+ if (_myPrivateValue == null) generateMyValue();
+ _sessionKey = calculateSessionKey(_myPrivateValue, _peerValue);
+ } else {
+ //System.err.println("Not ready yet.. privateValue and peerValue must be set ("
+ // + (_myPrivateValue != null ? "set" : "null") + ","
+ // + (_peerValue != null ? "set" : "null") + ")");
+ }
+ return _sessionKey;
+ }
+
+ /**
+ * Retrieve the extra bytes beyond the session key resulting from the DH exchange.
+ * If there aren't enough bytes (with all of them being consumed by the 32 byte key),
+ * the SHA256 of the key itself is used.
+ *
+ */
+ public ByteArray getExtraBytes() {
+ return _extraExchangedBytes;
+ }
+
+ /**
+ * Calculate a session key based on the private value and the public peer value
+ *
+ */
+ private final SessionKey calculateSessionKey(BigInteger myPrivateValue, BigInteger publicPeerValue) {
+ long start = System.currentTimeMillis();
+ SessionKey key = new SessionKey();
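+ // Shared secret: peerPublic^myPrivate mod p, i.e. (g^b)^a == (g^a)^b mod p, so both sides
+ // derive the same value; the first 32 bytes of its encoding become the 256 bit session key.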
+ BigInteger exchangedKey = publicPeerValue.modPow(myPrivateValue, CryptoConstants.elgp);
+ byte buf[] = exchangedKey.toByteArray();
+ byte val[] = new byte[32];
+ if (buf.length < val.length) {
+ System.arraycopy(buf, 0, val, 0, buf.length);
+ byte remaining[] = SHA256Generator.getInstance().calculateHash(val).getData();
+ _extraExchangedBytes.setData(remaining);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Storing " + remaining.length + " bytes from the DH exchange by SHA256 the session key");
+ } else { // (buf.length >= val.length)
+ System.arraycopy(buf, 0, val, 0, val.length);
+ // feed the extra bytes into the PRNG
+ _context.random().harvester().feedEntropy("DH", buf, val.length, buf.length-val.length);
+ byte remaining[] = new byte[buf.length - val.length];
+ System.arraycopy(buf, val.length, remaining, 0, remaining.length);
+ _extraExchangedBytes.setData(remaining);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Storing " + remaining.length + " bytes from the end of the DH exchange");
+ }
+ key.setData(val);
+ long end = System.currentTimeMillis();
+ long diff = end - start;
+
+ _context.statManager().addRateData("crypto.dhCalculateSessionTime", diff, diff);
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Generating session key took too long (" + diff + " ms");
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Generating session key " + diff + " ms");
+ }
+ return key;
+ }
+
+ /**
+ * rfc2631:
+ * The following algorithm MAY be used to validate a received public key y.
+ *
+ * 1. Verify that y lies within the interval [2,p-1]. If it does not,
+ * the key is invalid.
+ * 2. Compute y^q mod p. If the result == 1, the key is valid.
+ * Otherwise the key is invalid.
+ */
+ private static final void validatePublic(BigInteger publicValue) throws InvalidPublicParameterException {
+ int cmp = publicValue.compareTo(NativeBigInteger.ONE);
+ if (cmp <= 0)
+ throw new InvalidPublicParameterException("Public value is below two: " + publicValue.toString());
+
+ cmp = publicValue.compareTo(CryptoConstants.elgp);
+ if (cmp >= 0)
+ throw new InvalidPublicParameterException("Public value is above p-1: " + publicValue.toString());
+
+ // todo:
+ // whatever validation needs to be done to mirror the rfc's part 2 (we don't have a q, so can't do
+ // if (NativeBigInteger.ONE.compareTo(publicValue.modPow(q, CryptoConstants.elgp)) != 0)
+ // throw new InvalidPublicParameterException("Invalid public value with y^q mod p != 1");
+ //
+ }
+
+ /*
+ private static void testValidation() {
+ NativeBigInteger bi = new NativeBigInteger("-3416069082912684797963255430346582466254460710249795973742848334283491150671563023437888953432878859472362439146158925287289114133666004165938814597775594104058593692562989626922979416277152479694258099203456493995467386903611666213773085025718340335205240293383622352894862685806192183268523899615405287022135356656720938278415659792084974076416864813957028335830794117802560169423133816961503981757298122040391506600117301607823659479051969827845787626261515313227076880722069706394405554113103165334903531980102626092646197079218895216346725765704256096661045699444128316078549709132753443706200863682650825635513");
+ try {
+ validatePublic(bi);
+ System.err.println("valid?!");
+ } catch (InvalidPublicParameterException ippe) {
+ System.err.println("Ok, invalid. cool");
+ }
+
+ byte val[] = bi.toByteArray();
+ System.out.println("Len: " + val.length + " first is ok? " + ( (val[0] & 0x80) == 1)
+ + "\n" + DataHelper.toString(val, 64));
+ NativeBigInteger bi2 = new NativeBigInteger(1, val);
+ try {
+ validatePublic(bi2);
+ System.out.println("valid");
+ } catch (InvalidPublicParameterException ippe) {
+ System.out.println("invalid");
+ }
+ }
+ */
+
+ public static void main(String args[]) {
+ //if (true) { testValidation(); return; }
+
+ RandomSource.getInstance().nextBoolean(); // warm it up
+ try {
+ Thread.sleep(20 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+ I2PAppContext ctx = new I2PAppContext();
+ _log.debug("\n\n\n\nBegin test\n");
+ long negTime = 0;
+ try {
+ for (int i = 0; i < 5; i++) {
+ long startNeg = Clock.getInstance().now();
+ DHSessionKeyBuilder builder1 = new DHSessionKeyBuilder();
+ DHSessionKeyBuilder builder2 = new DHSessionKeyBuilder();
+ BigInteger pub1 = builder1.getMyPublicValue();
+ builder2.setPeerPublicValue(pub1);
+ BigInteger pub2 = builder2.getMyPublicValue();
+ builder1.setPeerPublicValue(pub2);
+ SessionKey key1 = builder1.getSessionKey();
+ SessionKey key2 = builder2.getSessionKey();
+ long endNeg = Clock.getInstance().now();
+ negTime += endNeg - startNeg;
+
+ if (!key1.equals(key2))
+ _log.error("**ERROR: Keys do not match");
+ else
+ _log.debug("**Success: Keys match");
+
+ byte iv[] = new byte[16];
+ RandomSource.getInstance().nextBytes(iv);
+ String origVal = "1234567890123456"; // 16 bytes max using AESEngine
+ byte enc[] = new byte[16];
+ byte dec[] = new byte[16];
+ ctx.aes().encrypt(origVal.getBytes(), 0, enc, 0, key1, iv, 16);
+ ctx.aes().decrypt(enc, 0, dec, 0, key2, iv, 16);
+ String tranVal = new String(dec);
+ if (origVal.equals(tranVal))
+ _log.debug("**Success: D(E(val)) == val");
+ else
+ _log.error("**ERROR: D(E(val)) != val [val=(" + tranVal + "), origVal=(" + origVal + ")");
+ }
+ } catch (InvalidPublicParameterException ippe) {
+ _log.error("Invalid dh", ippe);
+ }
+ _log.debug("Negotiation time for 5 runs: " + negTime + " @ " + negTime / 5l + "ms each");
+ try {
+ Thread.sleep(2000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+
+ private static class DHSessionKeyBuilderPrecalcRunner implements Runnable {
+ private int _minSize;
+ private int _maxSize;
+
+ private DHSessionKeyBuilderPrecalcRunner(int minSize, int maxSize) {
+ _minSize = minSize;
+ _maxSize = maxSize;
+ }
+
+ public void run() {
+ while (true) {
+
+ int curSize = 0;
+ long start = Clock.getInstance().now();
+ int startSize = getSize();
+ curSize = startSize;
+ while (curSize < _minSize) {
+ while (curSize < _maxSize) {
+ long curStart = System.currentTimeMillis();
+ curSize = addBuilder(precalc(curSize));
+ long curCalc = System.currentTimeMillis() - curStart;
+ // for some relief...
+ try {
+ Thread.sleep(CALC_DELAY + curCalc * 10);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+ }
+ long end = Clock.getInstance().now();
+ int numCalc = curSize - startSize;
+ if (numCalc > 0) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Precalced " + numCalc + " to " + curSize + " in "
+ + (end - start - CALC_DELAY * numCalc) + "ms (not counting "
+ + (CALC_DELAY * numCalc) + "ms relief). now sleeping");
+ }
+ try {
+ Thread.sleep(30 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+ }
+
+ private DHSessionKeyBuilder precalc(int i) {
+ DHSessionKeyBuilder builder = new DHSessionKeyBuilder(false);
+ builder.getMyPublicValue();
+ //_log.debug("Precalc " + i + " complete");
+ return builder;
+ }
+ }
+
+ public static class InvalidPublicParameterException extends I2PException {
+ public InvalidPublicParameterException() {
+ super();
+ }
+ public InvalidPublicParameterException(String msg) {
+ super(msg);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/DSAEngine.java b/src/net/i2p/crypto/DSAEngine.java
new file mode 100644
index 0000000..113627c
--- /dev/null
+++ b/src/net/i2p/crypto/DSAEngine.java
@@ -0,0 +1,228 @@
+package net.i2p.crypto;
+
+/*
+ * Copyright (c) 2003, TheCrypto
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * - Neither the name of the TheCrypto may be used to endorse or promote
+ * products derived from this software without specific prior written
+ * permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+import java.io.InputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.util.Arrays;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.Signature;
+import net.i2p.data.SigningPrivateKey;
+import net.i2p.data.SigningPublicKey;
+import net.i2p.util.Log;
+import net.i2p.util.NativeBigInteger;
+
+public class DSAEngine {
+ private Log _log;
+ private I2PAppContext _context;
+
+ public DSAEngine(I2PAppContext context) {
+ _log = context.logManager().getLog(DSAEngine.class);
+ _context = context;
+ }
+ public static DSAEngine getInstance() {
+ return I2PAppContext.getGlobalContext().dsa();
+ }
+ public boolean verifySignature(Signature signature, byte signedData[], SigningPublicKey verifyingKey) {
+ return verifySignature(signature, signedData, 0, signedData.length, verifyingKey);
+ }
+ public boolean verifySignature(Signature signature, byte signedData[], int offset, int size, SigningPublicKey verifyingKey) {
+ return verifySignature(signature, calculateHash(signedData, offset, size), verifyingKey);
+ }
+ public boolean verifySignature(Signature signature, InputStream in, SigningPublicKey verifyingKey) {
+ return verifySignature(signature, calculateHash(in), verifyingKey);
+ }
+ public boolean verifySignature(Signature signature, Hash hash, SigningPublicKey verifyingKey) {
+ long start = _context.clock().now();
+
+ try {
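+ // An I2P DSA signature is 40 bytes: r in the first 20 bytes, s in the last 20.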
+ byte[] sigbytes = signature.getData();
+ byte rbytes[] = new byte[20];
+ byte sbytes[] = new byte[20];
+ for (int x = 0; x < 40; x++) {
+ if (x < 20) {
+ rbytes[x] = sigbytes[x];
+ } else {
+ sbytes[x - 20] = sigbytes[x];
+ }
+ }
+ BigInteger s = new NativeBigInteger(1, sbytes);
+ BigInteger r = new NativeBigInteger(1, rbytes);
+ BigInteger y = new NativeBigInteger(1, verifyingKey.getData());
+ BigInteger w = null;
+ try {
+ w = s.modInverse(CryptoConstants.dsaq);
+ } catch (ArithmeticException ae) {
+ return false;
+ }
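+ // Standard DSA verification: w = s^-1 mod q, u1 = H(m)*w mod q, u2 = r*w mod q,
+ // v = ((g^u1 * y^u2) mod p) mod q; the signature is valid iff v == r.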
+ byte data[] = hash.getData();
+ NativeBigInteger bi = new NativeBigInteger(1, data);
+ BigInteger u1 = bi.multiply(w).mod(CryptoConstants.dsaq);
+ BigInteger u2 = r.multiply(w).mod(CryptoConstants.dsaq);
+ BigInteger modval = CryptoConstants.dsag.modPow(u1, CryptoConstants.dsap);
+ BigInteger modmulval = modval.multiply(y.modPow(u2,CryptoConstants.dsap));
+ BigInteger v = (modmulval).mod(CryptoConstants.dsap).mod(CryptoConstants.dsaq);
+
+ boolean ok = v.compareTo(r) == 0;
+
+ long diff = _context.clock().now() - start;
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Took too long to verify the signature (" + diff + "ms)");
+ }
+ return ok;
+ } catch (Exception e) {
+ _log.log(Log.CRIT, "Error verifying the signature", e);
+ return false;
+ }
+ }
+
+ public Signature sign(byte data[], SigningPrivateKey signingKey) {
+ return sign(data, 0, data.length, signingKey);
+ }
+ public Signature sign(byte data[], int offset, int length, SigningPrivateKey signingKey) {
+ if ((signingKey == null) || (data == null) || (data.length <= 0)) return null;
+ Hash h = calculateHash(data, offset, length);
+ return sign(h, signingKey);
+ }
+
+ public Signature sign(InputStream in, SigningPrivateKey signingKey) {
+ if ((signingKey == null) || (in == null) ) return null;
+ Hash h = calculateHash(in);
+ return sign(h, signingKey);
+ }
+
+ public Signature sign(Hash hash, SigningPrivateKey signingKey) {
+ if ((signingKey == null) || (hash == null)) return null;
+ long start = _context.clock().now();
+
+ Signature sig = new Signature();
+ BigInteger k;
+
+ boolean ok = false;
+ do {
+ k = new BigInteger(160, _context.random());
+ ok = k.compareTo(CryptoConstants.dsaq) < 0; // k must be strictly less than q
+ ok = ok && !k.equals(BigInteger.ZERO);
+ //System.out.println("K picked (ok? " + ok + "): " + k.bitLength() + ": " + k.toString());
+ } while (!ok);
+
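+ // r = (g^k mod p) mod q and s = k^-1 * (H(m) + x*r) mod q, the standard DSA signing equations.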
+ BigInteger r = CryptoConstants.dsag.modPow(k, CryptoConstants.dsap).mod(CryptoConstants.dsaq);
+ BigInteger kinv = k.modInverse(CryptoConstants.dsaq);
+
+ BigInteger M = new NativeBigInteger(1, hash.getData());
+ BigInteger x = new NativeBigInteger(1, signingKey.getData());
+ BigInteger s = (kinv.multiply(M.add(x.multiply(r)))).mod(CryptoConstants.dsaq);
+
+ byte[] rbytes = r.toByteArray();
+ byte[] sbytes = s.toByteArray();
+ byte[] out = new byte[40];
+
+ // (q^random)%p is computationally random
+ _context.random().harvester().feedEntropy("DSA.sign", rbytes, 0, rbytes.length);
+
+ if (rbytes.length == 20) {
+ for (int i = 0; i < 20; i++) {
+ out[i] = rbytes[i];
+ }
+ } else if (rbytes.length == 21) {
+ for (int i = 0; i < 20; i++) {
+ out[i] = rbytes[i + 1];
+ }
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Using short rbytes.length [" + rbytes.length + "]");
+ for (int i = 0; i < rbytes.length; i++)
+ out[i + 20 - rbytes.length] = rbytes[i];
+ }
+ if (sbytes.length == 20) {
+ for (int i = 0; i < 20; i++) {
+ out[i + 20] = sbytes[i];
+ }
+ } else if (sbytes.length == 21) {
+ for (int i = 0; i < 20; i++) {
+ out[i + 20] = sbytes[i + 1];
+ }
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Using short sbytes.length [" + sbytes.length + "]");
+ for (int i = 0; i < sbytes.length; i++)
+ out[i + 20 + 20 - sbytes.length] = sbytes[i];
+ }
+ sig.setData(out);
+
+ long diff = _context.clock().now() - start;
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Took too long to sign (" + diff + "ms)");
+ }
+
+ return sig;
+ }
+
+ public Hash calculateHash(InputStream in) {
+ SHA1 digest = new SHA1();
+ byte buf[] = new byte[64];
+ int read = 0;
+ try {
+ while ( (read = in.read(buf)) != -1) {
+ digest.engineUpdate(buf, 0, read);
+ }
+ } catch (IOException ioe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Unable to hash the stream", ioe);
+ return null;
+ }
+ return new Hash(digest.engineDigest());
+ }
+
+ public static Hash calculateHash(byte[] source, int offset, int len) {
+ SHA1 h = new SHA1();
+ h.engineUpdate(source, offset, len);
+ byte digested[] = h.digest();
+ return new Hash(digested);
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ byte data[] = new byte[4096];
+ ctx.random().nextBytes(data);
+ Object keys[] = ctx.keyGenerator().generateSigningKeypair();
+ try {
+ for (int i = 0; i < 10; i++) {
+ Signature sig = ctx.dsa().sign(data, (SigningPrivateKey)keys[1]);
+ boolean ok = ctx.dsa().verifySignature(sig, data, (SigningPublicKey)keys[0]);
+ System.out.println("OK: " + ok);
+ }
+ } catch (Exception e) { e.printStackTrace(); }
+ ctx.random().saveSeed();
+ }
+}
diff --git a/src/net/i2p/crypto/DummyDSAEngine.java b/src/net/i2p/crypto/DummyDSAEngine.java
new file mode 100644
index 0000000..0140b23
--- /dev/null
+++ b/src/net/i2p/crypto/DummyDSAEngine.java
@@ -0,0 +1,26 @@
+package net.i2p.crypto;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Signature;
+import net.i2p.data.SigningPrivateKey;
+import net.i2p.data.SigningPublicKey;
+
+/**
+ * Stub that offers no authentication.
+ *
+ */
+public class DummyDSAEngine extends DSAEngine {
+ public DummyDSAEngine(I2PAppContext context) {
+ super(context);
+ }
+
+ public boolean verifySignature(Signature signature, byte signedData[], SigningPublicKey verifyingKey) {
+ return true;
+ }
+
+ public Signature sign(byte data[], SigningPrivateKey signingKey) {
+ Signature sig = new Signature();
+ sig.setData(Signature.FAKE_SIGNATURE);
+ return sig;
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/DummyElGamalEngine.java b/src/net/i2p/crypto/DummyElGamalEngine.java
new file mode 100644
index 0000000..2b2f5a9
--- /dev/null
+++ b/src/net/i2p/crypto/DummyElGamalEngine.java
@@ -0,0 +1,106 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.PrivateKey;
+import net.i2p.data.PublicKey;
+import net.i2p.util.Log;
+
+/**
+ * Fake ElG E and D, useful for when performance isn't being tested
+ *
+ * @author jrandom
+ */
+public class DummyElGamalEngine extends ElGamalEngine {
+ private Log _log;
+
+ /**
+ * The ElGamal engine should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public DummyElGamalEngine(I2PAppContext context) {
+ super(context);
+ _log = context.logManager().getLog(DummyElGamalEngine.class);
+ _log.log(Log.CRIT, "Dummy ElGamal engine in use! NO DATA SECURITY. Danger Will Robinson, Danger!",
+ new Exception("I really hope you know what you're doing"));
+ }
+ private DummyElGamalEngine() { super(null); }
+
+ /** encrypt the data to the public key
+ * @return encrypted data
+ * @param publicKey public key encrypt to
+ * @param data data to encrypt
+ */
+ public byte[] encrypt(byte data[], PublicKey publicKey) {
+ if ((data == null) || (data.length >= 223))
+ throw new IllegalArgumentException("Data to encrypt must be < 223 bytes at the moment");
+ if (publicKey == null) throw new IllegalArgumentException("Null public key specified");
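+ // Mimic the real engine's plaintext layout (nonzero lead byte, SHA256(data), then the data)
+ // but skip the ElGamal math entirely, zero-padding the result into the 514 byte wire format.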
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(256);
+ try {
+ baos.write(0xFF);
+ Hash hash = SHA256Generator.getInstance().calculateHash(data);
+ hash.writeBytes(baos);
+ baos.write(data);
+ baos.flush();
+ } catch (Exception e) {
+ _log.error("Internal error writing to buffer", e);
+ return null;
+ }
+ byte d2[] = baos.toByteArray();
+ byte[] out = new byte[514];
+ System.arraycopy(d2, 0, out, (d2.length < 257 ? 257 - d2.length : 0), (d2.length > 257 ? 257 : d2.length));
+ return out;
+ }
+
+ /** Decrypt the data
+ * @param encrypted encrypted data
+ * @param privateKey private key to decrypt with
+ * @return unencrypted data
+ */
+ public byte[] decrypt(byte encrypted[], PrivateKey privateKey) {
+ if ((encrypted == null) || (encrypted.length > 514))
+ throw new IllegalArgumentException("Data to decrypt must be <= 514 bytes at the moment");
+ byte val[] = new byte[257];
+ System.arraycopy(encrypted, 0, val, 0, val.length);
+ int i = 0;
+ for (i = 0; i < val.length; i++)
+ if (val[i] != (byte) 0x00) break;
+ ByteArrayInputStream bais = new ByteArrayInputStream(val, i, val.length - i);
+ Hash hash = new Hash();
+ byte rv[] = null;
+ try {
+ bais.read(); // skip first byte
+ hash.readBytes(bais);
+ rv = new byte[val.length - i - 1 - 32];
+ bais.read(rv);
+ } catch (Exception e) {
+ _log.error("Internal error reading value", e);
+ return null;
+ }
+ Hash calcHash = SHA256Generator.getInstance().calculateHash(rv);
+ if (calcHash.equals(hash)) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Hash matches: " + DataHelper.toString(hash.getData(), hash.getData().length));
+ return rv;
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Doesn't match hash [calc=" + calcHash + " sent hash=" + hash + "]\ndata = " + new String(rv),
+ new Exception("Doesn't match"));
+ return null;
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/DummyPooledRandomSource.java b/src/net/i2p/crypto/DummyPooledRandomSource.java
new file mode 100644
index 0000000..6b47520
--- /dev/null
+++ b/src/net/i2p/crypto/DummyPooledRandomSource.java
@@ -0,0 +1,98 @@
+package net.i2p.crypto;
+
+import java.util.Random;
+import net.i2p.I2PAppContext;
+import net.i2p.util.PooledRandomSource;
+import net.i2p.util.RandomSource;
+import net.i2p.util.Log;
+
+/**
+ * A PooledRandomSource whose pool members are backed by plain java.util.Random
+ * rather than a seeded, secure source - no real entropy is gathered.
+ */
+public class DummyPooledRandomSource extends PooledRandomSource {
+ public DummyPooledRandomSource(I2PAppContext context) {
+ super(context);
+ }
+
+ protected void initializePool(I2PAppContext context) {
+ _pool = new RandomSource[POOL_SIZE];
+ for (int i = 0; i < POOL_SIZE; i++) {
+ _pool[i] = new DummyRandomSource(context);
+ _pool[i].nextBoolean();
+ }
+ _nextPool = 0;
+ }
+
+ private class DummyRandomSource extends RandomSource {
+ private Random _prng;
+ public DummyRandomSource(I2PAppContext context) {
+ super(context);
+ // when we replace to have hooks for fortuna (etc), replace with
+ // a factory (or just a factory method)
+ _prng = new Random();
+ }
+
+ /**
+ * According to the java docs (http://java.sun.com/j2se/1.4.1/docs/api/java/util/Random.html#nextInt(int))
+ * nextInt(n) should return a number between 0 and n (including 0 and excluding n). However, their pseudocode,
+ * as well as sun's, kaffe's, and classpath's implementation INCLUDES NEGATIVE VALUES.
+ * WTF. Ok, so we're going to have it return between 0 and n (including 0, excluding n), since
+ * that's what it has been used for.
+ *
+ */
+ public int nextInt(int n) {
+ if (n == 0) return 0;
+ int val = _prng.nextInt(n);
+ if (val < 0) val = 0 - val;
+ if (val >= n) val = val % n;
+ return val;
+ }
+
+ /**
+ * Like the modified nextInt, nextLong(n) returns a random number from 0 through n,
+ * including 0, excluding n.
+ */
+ public long nextLong(long n) {
+ long v = _prng.nextLong();
+ if (v < 0) v = 0 - v;
+ if (v >= n) v = v % n;
+ return v;
+ }
+
+ /*
+ * The remaining overrides simply delegate to the local java.util.Random
+ * instance rather than any pooled or seeded source.
+ */
+ public boolean nextBoolean() { return _prng.nextBoolean(); }
+ public void nextBytes(byte buf[]) { _prng.nextBytes(buf); }
+ public double nextDouble() { return _prng.nextDouble(); }
+ public float nextFloat() { return _prng.nextFloat(); }
+ public double nextGaussian() { return _prng.nextGaussian(); }
+ public int nextInt() { return _prng.nextInt(); }
+ public long nextLong() { return _prng.nextLong(); }
+ }
+}
diff --git a/src/net/i2p/crypto/ElGamalAESEngine.java b/src/net/i2p/crypto/ElGamalAESEngine.java
new file mode 100644
index 0000000..31f2f10
--- /dev/null
+++ b/src/net/i2p/crypto/ElGamalAESEngine.java
@@ -0,0 +1,615 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.DataFormatException;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.PrivateKey;
+import net.i2p.data.PublicKey;
+import net.i2p.data.SessionKey;
+import net.i2p.data.SessionTag;
+import net.i2p.util.Log;
+
+/**
+ * Handles the actual ElGamal+AES encryption and decryption scenarios using the
+ * supplied keys and data.
+ */
+public class ElGamalAESEngine {
+ private final static Log _log = new Log(ElGamalAESEngine.class);
+ private final static int MIN_ENCRYPTED_SIZE = 80; // smallest possible resulting size
+ private I2PAppContext _context;
+
+ private ElGamalAESEngine() { // nop
+ }
+
+ public ElGamalAESEngine(I2PAppContext ctx) {
+ _context = ctx;
+
+ _context.statManager().createFrequencyStat("crypto.elGamalAES.encryptNewSession",
+ "how frequently we encrypt to a new ElGamal/AES+SessionTag session?",
+ "Encryption", new long[] { 60*1000l, 60*60*1000l, 24*60*60*1000l});
+ _context.statManager().createFrequencyStat("crypto.elGamalAES.encryptExistingSession",
+ "how frequently we encrypt to an existing ElGamal/AES+SessionTag session?",
+ "Encryption", new long[] { 60 * 1000l, 60 * 60 * 1000l, 24 * 60 * 60 * 1000l});
+ _context.statManager().createFrequencyStat("crypto.elGamalAES.decryptNewSession",
+ "how frequently we decrypt with a new ElGamal/AES+SessionTag session?",
+ "Encryption", new long[] { 60 * 1000l, 60 * 60 * 1000l, 24 * 60 * 60 * 1000l});
+ _context.statManager().createFrequencyStat("crypto.elGamalAES.decryptExistingSession",
+ "how frequently we decrypt with an existing ElGamal/AES+SessionTag session?",
+ "Encryption", new long[] { 60 * 1000l, 60 * 60 * 1000l, 24 * 60 * 60 * 1000l});
+ _context.statManager().createFrequencyStat("crypto.elGamalAES.decryptFailed",
+ "how frequently we fail to decrypt with ElGamal/AES+SessionTag?", "Encryption",
+ new long[] { 60 * 60 * 1000l, 24 * 60 * 60 * 1000l});
+ }
+
+ /**
+ * Decrypt the message using the given private key. This works according to the
+ * ElGamal+AES algorithm in the data structure spec.
+ *
+ */
+ public byte[] decrypt(byte data[], PrivateKey targetPrivateKey) throws DataFormatException {
+ if (data == null) {
+ if (_log.shouldLog(Log.ERROR)) _log.error("Null data being decrypted?");
+ return null;
+ } else if (data.length < MIN_ENCRYPTED_SIZE) {
+ if (_log.shouldLog(Log.ERROR))
+ _log.error("Data is less than the minimum size (" + data.length + " < " + MIN_ENCRYPTED_SIZE + ")");
+ return null;
+ }
+
+ byte tag[] = new byte[32];
+ System.arraycopy(data, 0, tag, 0, tag.length);
+ SessionTag st = new SessionTag(tag);
+ SessionKey key = _context.sessionKeyManager().consumeTag(st);
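+ // A known tag means an existing ElG/AES+SessionTag session (scenario 2); an unknown tag
+ // means the block must begin with a full 514 byte ElGamal header (scenario 1).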
+ SessionKey foundKey = new SessionKey();
+ foundKey.setData(null);
+ SessionKey usedKey = new SessionKey();
+ Set foundTags = new HashSet();
+ byte decrypted[] = null;
+ boolean wasExisting = false;
+ if (key != null) {
+ //if (_log.shouldLog(Log.DEBUG)) _log.debug("Key is known for tag " + st);
+ usedKey.setData(key.getData());
+ long id = _context.random().nextLong();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug(id + ": Decrypting existing session encrypted with tag: " + st.toString() + ": key: " + key.toBase64() + ": " + data.length + " bytes: " + Base64.encode(data, 0, 64));
+
+ decrypted = decryptExistingSession(data, key, targetPrivateKey, foundTags, usedKey, foundKey);
+ if (decrypted != null) {
+ _context.statManager().updateFrequency("crypto.elGamalAES.decryptExistingSession");
+ if ( (foundTags.size() > 0) && (_log.shouldLog(Log.WARN)) )
+ _log.warn(id + ": ElG/AES decrypt success with " + st + ": found tags: " + foundTags);
+ wasExisting = true;
+ } else {
+ _context.statManager().updateFrequency("crypto.elGamalAES.decryptFailed");
+ if (_log.shouldLog(Log.WARN)) {
+ _log.warn(id + ": ElG decrypt fail: known tag [" + st + "], failed decrypt");
+ }
+ }
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Key is NOT known for tag " + st);
+ decrypted = decryptNewSession(data, targetPrivateKey, foundTags, usedKey, foundKey);
+ if (decrypted != null) {
+ _context.statManager().updateFrequency("crypto.elGamalAES.decryptNewSession");
+ if ( (foundTags.size() > 0) && (_log.shouldLog(Log.WARN)) )
+ _log.warn("ElG decrypt success: found tags: " + foundTags);
+ } else {
+ _context.statManager().updateFrequency("crypto.elGamalAES.decryptFailed");
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("ElG decrypt fail: unknown tag: " + st);
+ }
+ }
+
+ if ((key == null) && (decrypted == null)) {
+ //_log.debug("Unable to decrypt the data starting with tag [" + st + "] - did the tag expire recently?", new Exception("Decrypt failure"));
+ }
+
+ if (foundTags.size() > 0) {
+ if (foundKey.getData() != null) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Found key: " + foundKey.toBase64() + " tags: " + foundTags + " wasExisting? " + wasExisting);
+ _context.sessionKeyManager().tagsReceived(foundKey, foundTags);
+ } else {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Used key: " + usedKey.toBase64() + " tags: " + foundTags + " wasExisting? " + wasExisting);
+ _context.sessionKeyManager().tagsReceived(usedKey, foundTags);
+ }
+ }
+ return decrypted;
+ }
+
+ /**
+ * scenario 1:
+ * Begin with 222 bytes, ElG encrypted, containing:
+ * - 32 byte SessionKey
+ * - 32 byte pre-IV for the AES
+ * - 158 bytes of random padding
+ * Then encrypt with AES using that session key and the first 16 bytes of the SHA256 of the pre-IV, using
+ * the decryptAESBlock method & structure.
+ *
+ * @param foundTags set which is filled with any sessionTags found during decryption
+ * @param foundKey session key which may be filled with a new sessionKey found during decryption
+ *
+ * @return null if decryption fails
+ */
+ byte[] decryptNewSession(byte data[], PrivateKey targetPrivateKey, Set foundTags, SessionKey usedKey,
+ SessionKey foundKey) throws DataFormatException {
+ if (data == null) {
+ //if (_log.shouldLog(Log.WARN)) _log.warn("Data is null, unable to decrypt new session");
+ return null;
+ } else if (data.length < 514) {
+ //if (_log.shouldLog(Log.WARN)) _log.warn("Data length is too small (" + data.length + ")");
+ return null;
+ }
+ byte elgEncr[] = new byte[514];
+ if (data.length > 514) {
+ System.arraycopy(data, 0, elgEncr, 0, 514);
+ } else {
+ System.arraycopy(data, 0, elgEncr, 514 - data.length, data.length);
+ }
+ byte elgDecr[] = _context.elGamalEngine().decrypt(elgEncr, targetPrivateKey);
+ if (elgDecr == null) {
+ //if (_log.shouldLog(Log.WARN))
+ // _log.warn("decrypt returned null", new Exception("decrypt failed"));
+ return null;
+ }
+
+ byte preIV[] = null;
+
+ int offset = 0;
+ byte key[] = new byte[SessionKey.KEYSIZE_BYTES];
+ System.arraycopy(elgDecr, offset, key, 0, SessionKey.KEYSIZE_BYTES);
+ offset += SessionKey.KEYSIZE_BYTES;
+ usedKey.setData(key);
+ preIV = new byte[32];
+ System.arraycopy(elgDecr, offset, preIV, 0, 32);
+ offset += 32;
+
+ //_log.debug("Pre IV for decryptNewSession: " + DataHelper.toString(preIV, 32));
+ //_log.debug("SessionKey for decryptNewSession: " + DataHelper.toString(key.getData(), 32));
+ Hash ivHash = _context.sha().calculateHash(preIV);
+ byte iv[] = new byte[16];
+ System.arraycopy(ivHash.getData(), 0, iv, 0, 16);
+
+ // feed the extra bytes into the PRNG
+ _context.random().harvester().feedEntropy("ElG/AES", elgDecr, offset, elgDecr.length - offset);
+
+ byte aesDecr[] = decryptAESBlock(data, 514, data.length-514, usedKey, iv, null, foundTags, foundKey);
+
+ //if (_log.shouldLog(Log.DEBUG))
+ // _log.debug("Decrypt with a NEW session successfull: # tags read = " + foundTags.size(),
+ // new Exception("Decrypted by"));
+ return aesDecr;
+ }
+
+ /**
+ * scenario 2:
+ * The data begins with 32 byte session tag, which also serves as the preIV.
+ * Then decrypt with AES using that session key and the first 16 bytes of the SHA256 of the pre-IV:
+ * - 2 byte integer specifying the # of session tags
+ * - that many 32 byte session tags
+ * - 4 byte integer specifying data.length
+ * - SHA256 of data
+ * - 1 byte flag that, if == 1, is followed by a new SessionKey
+ * - data
+ * - random bytes, padding the total size to greater than paddedSize with a mod 16 = 0
+ *
+ * If anything doesn't match up in decryption, it falls back to decryptNewSession
+ *
+ * @param foundTags set which is filled with any sessionTags found during decryption
+ * @param foundKey session key which may be filled with a new sessionKey found during decryption
+ *
+ */
+ byte[] decryptExistingSession(byte data[], SessionKey key, PrivateKey targetPrivateKey, Set foundTags,
+ SessionKey usedKey, SessionKey foundKey) throws DataFormatException {
+ byte preIV[] = new byte[32];
+ System.arraycopy(data, 0, preIV, 0, preIV.length);
+ Hash ivHash = _context.sha().calculateHash(preIV);
+ byte iv[] = new byte[16];
+ System.arraycopy(ivHash.getData(), 0, iv, 0, 16);
+
+ usedKey.setData(key.getData());
+
+ //_log.debug("Pre IV for decryptExistingSession: " + DataHelper.toString(preIV, 32));
+ //_log.debug("SessionKey for decryptNewSession: " + DataHelper.toString(key.getData(), 32));
+ byte decrypted[] = decryptAESBlock(data, 32, data.length-32, key, iv, preIV, foundTags, foundKey);
+ if (decrypted == null) {
+ // it begins with a valid session tag, but that's just a coincidence.
+ //if (_log.shouldLog(Log.DEBUG))
+ // _log.debug("Decrypt with a non session tag, but tags read: " + foundTags.size());
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Decrypting looks negative... existing key fails with existing tag, lets try as a new one");
+ byte rv[] = decryptNewSession(data, targetPrivateKey, foundTags, usedKey, foundKey);
+ if (_log.shouldLog(Log.WARN)) {
+ if (rv == null)
+ _log.warn("Decrypting failed with a known existing tag as either an existing message or a new session");
+ else
+ _log.warn("Decrypting suceeded as a new session, even though it used an existing tag!");
+ }
+ return rv;
+ }
+ // existing session decrypted successfully!
+ //if (_log.shouldLog(Log.DEBUG))
+ // _log.debug("Decrypt with an EXISTING session tag successfull, # tags read: " + foundTags.size(),
+ // new Exception("Decrypted by"));
+ return decrypted;
+ }
+
+ /**
+ * Decrypt the AES data with the session key and IV. The result should be:
+ * - 2 byte integer specifying the # of session tags
+ * - that many 32 byte session tags
+ * - 4 byte integer specifying data.length
+ * - SHA256 of data
+ * - 1 byte flag that, if == 1, is followed by a new SessionKey
+ * - data
+ * - random bytes, padding the total size to greater than paddedSize with a mod 16 = 0
+ *
+ * If anything doesn't match up in decryption, return null. Otherwise, return
+ * the decrypted data and update the session as necessary. If the sentTag is not null,
+ * consume it, but if it is null, record the keys, etc as part of a new session.
+ *
+ * @param foundTags set which is filled with any sessionTags found during decryption
+ * @param foundKey session key which may be filled with a new sessionKey found during decryption
+ */
+ byte[] decryptAESBlock(byte encrypted[], SessionKey key, byte iv[],
+ byte sentTag[], Set foundTags, SessionKey foundKey) throws DataFormatException {
+ return decryptAESBlock(encrypted, 0, encrypted.length, key, iv, sentTag, foundTags, foundKey);
+ }
+ byte[] decryptAESBlock(byte encrypted[], int offset, int encryptedLen, SessionKey key, byte iv[],
+ byte sentTag[], Set foundTags, SessionKey foundKey) throws DataFormatException {
+ //_log.debug("iv for decryption: " + DataHelper.toString(iv, 16));
+ //_log.debug("decrypting AES block. encr.length = " + (encrypted == null? -1 : encrypted.length) + " sentTag: " + DataHelper.toString(sentTag, 32));
+ byte decrypted[] = new byte[encryptedLen];
+ _context.aes().decrypt(encrypted, offset, decrypted, 0, key, iv, encryptedLen);
+ //Hash h = _context.sha().calculateHash(decrypted);
+ //_log.debug("Hash of entire aes block after decryption: \n" + DataHelper.toString(h.getData(), 32));
+ try {
+ SessionKey newKey = null;
+ Hash readHash = null;
+ List tags = null;
+
+ //ByteArrayInputStream bais = new ByteArrayInputStream(decrypted);
+ int cur = 0;
+ long numTags = DataHelper.fromLong(decrypted, cur, 2);
+ if (numTags > 0) tags = new ArrayList((int)numTags);
+ cur += 2;
+ //_log.debug("# tags: " + numTags);
+ if ((numTags < 0) || (numTags > 200)) throw new Exception("Invalid number of session tags");
+ if (numTags * SessionTag.BYTE_LENGTH > decrypted.length - 2) {
+ throw new Exception("# tags: " + numTags + " is too many for " + (decrypted.length - 2));
+ }
+ for (int i = 0; i < numTags; i++) {
+ byte tag[] = new byte[SessionTag.BYTE_LENGTH];
+ System.arraycopy(decrypted, cur, tag, 0, SessionTag.BYTE_LENGTH);
+ cur += SessionTag.BYTE_LENGTH;
+ tags.add(new SessionTag(tag));
+ }
+ long len = DataHelper.fromLong(decrypted, cur, 4);
+ cur += 4;
+ //_log.debug("len: " + len);
+ if ((len < 0) || (len > decrypted.length - cur - Hash.HASH_LENGTH - 1))
+ throw new Exception("Invalid size of payload (" + len + ", remaining " + (decrypted.length-cur) +")");
+ byte hashval[] = new byte[Hash.HASH_LENGTH];
+ System.arraycopy(decrypted, cur, hashval, 0, Hash.HASH_LENGTH);
+ cur += Hash.HASH_LENGTH;
+ readHash = new Hash();
+ readHash.setData(hashval);
+ byte flag = decrypted[cur++];
+ if (flag == 0x01) {
+ byte rekeyVal[] = new byte[SessionKey.KEYSIZE_BYTES];
+ System.arraycopy(decrypted, cur, rekeyVal, 0, SessionKey.KEYSIZE_BYTES);
+ cur += SessionKey.KEYSIZE_BYTES;
+ newKey = new SessionKey();
+ newKey.setData(rekeyVal);
+ }
+ byte unencrData[] = new byte[(int) len];
+ System.arraycopy(decrypted, cur, unencrData, 0, (int)len);
+ cur += len;
+ Hash calcHash = _context.sha().calculateHash(unencrData);
+ boolean eq = calcHash.equals(readHash);
+
+ if (eq) {
+ // everything matches. w00t.
+ if (tags != null)
+ foundTags.addAll(tags);
+ if (newKey != null) foundKey.setData(newKey.getData());
+ return unencrData;
+ }
+
+ throw new Exception("Hash does not match");
+ } catch (Exception e) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Unable to decrypt AES block", e);
+ return null;
+ }
+ }
+
+ /**
+ * Encrypt the unencrypted data to the target. The total size returned will be
+ * no less than the paddedSize parameter, but may be more. This method uses the
+ * ElGamal+AES algorithm in the data structure spec.
+ *
+ * @param target public key to which the data should be encrypted.
+ * @param key session key to use during encryption
+ * @param tagsForDelivery session tags to be associated with the key (or newKey if specified), or null
+ * @param currentTag sessionTag to use, or null if it should use ElG
+ * @param newKey key to be delivered to the target, with which the tagsForDelivery should be associated
+ * @param paddedSize minimum size in bytes of the body after padding it (if less than the
+ * body's real size, no bytes are appended but the body is not truncated)
+ */
+ public byte[] encrypt(byte data[], PublicKey target, SessionKey key, Set tagsForDelivery,
+ SessionTag currentTag, SessionKey newKey, long paddedSize) {
+ if (currentTag == null) {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Current tag is null, encrypting as new session", new Exception("encrypt new"));
+ _context.statManager().updateFrequency("crypto.elGamalAES.encryptNewSession");
+ return encryptNewSession(data, target, key, tagsForDelivery, newKey, paddedSize);
+ }
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Current tag is NOT null, encrypting as existing session", new Exception("encrypt existing"));
+ _context.statManager().updateFrequency("crypto.elGamalAES.encryptExistingSession");
+ byte rv[] = encryptExistingSession(data, target, key, tagsForDelivery, currentTag, newKey, paddedSize);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Existing session encrypted with tag: " + currentTag.toString() + ": " + rv.length + " bytes and key: " + key.toBase64() + ": " + Base64.encode(rv, 0, 64));
+ return rv;
+ }
+
+ /**
+ * Encrypt the data to the target using the given key and deliver the specified tags
+ */
+ public byte[] encrypt(byte data[], PublicKey target, SessionKey key, Set tagsForDelivery,
+ SessionTag currentTag, long paddedSize) {
+ return encrypt(data, target, key, tagsForDelivery, currentTag, null, paddedSize);
+ }
+
+ /**
+ * Encrypt the data to the target using the given key and deliver the specified tags
+ */
+ public byte[] encrypt(byte data[], PublicKey target, SessionKey key, Set tagsForDelivery, long paddedSize) {
+ return encrypt(data, target, key, tagsForDelivery, null, null, paddedSize);
+ }
+
+ /**
+ * Encrypt the data to the target using the given key delivering no tags
+ */
+ public byte[] encrypt(byte data[], PublicKey target, SessionKey key, long paddedSize) {
+ return encrypt(data, target, key, null, null, null, paddedSize);
+ }
+
+ /**
+ * scenario 1:
+ * Begin with 222 bytes, ElG encrypted, containing:
+ * - 32 byte SessionKey
+ * - 32 byte pre-IV for the AES
+ * - 158 bytes of random padding
+ * Then encrypt with AES using that session key and the first 16 bytes of the SHA256 of the pre-IV:
+ * - 2 byte integer specifying the # of session tags
+ * - that many 32 byte session tags
+ * - 4 byte integer specifying data.length
+ * - SHA256 of data
+ * - 1 byte flag that, if == 1, is followed by a new SessionKey
+ * - data
+ * - random bytes, padding the total size to greater than paddedSize with a mod 16 = 0
+ *
+ */
+ byte[] encryptNewSession(byte data[], PublicKey target, SessionKey key, Set tagsForDelivery,
+ SessionKey newKey, long paddedSize) {
+ //_log.debug("Encrypting to a NEW session");
+ byte elgSrcData[] = new byte[SessionKey.KEYSIZE_BYTES+32+158];
+ System.arraycopy(key.getData(), 0, elgSrcData, 0, SessionKey.KEYSIZE_BYTES);
+ byte preIV[] = new byte[32];
+ _context.random().nextBytes(preIV);
+ System.arraycopy(preIV, 0, elgSrcData, SessionKey.KEYSIZE_BYTES, 32);
+ byte rnd[] = new byte[158];
+ _context.random().nextBytes(rnd);
+ System.arraycopy(rnd, 0, elgSrcData, SessionKey.KEYSIZE_BYTES+32, 158);
+
+ //_log.debug("Pre IV for encryptNewSession: " + DataHelper.toString(preIV, 32));
+ //_log.debug("SessionKey for encryptNewSession: " + DataHelper.toString(key.getData(), 32));
+ long before = _context.clock().now();
+ byte elgEncr[] = _context.elGamalEngine().encrypt(elgSrcData, target);
+ long after = _context.clock().now();
+ if (_log.shouldLog(Log.INFO))
+ _log.info("elgEngine.encrypt of the session key took " + (after - before) + "ms");
+ if (elgEncr.length < 514) {
+ byte elg[] = new byte[514];
+ int diff = elg.length - elgEncr.length;
+ //if (_log.shouldLog(Log.DEBUG)) _log.debug("Difference in size: " + diff);
+ System.arraycopy(elgEncr, 0, elg, diff, elgEncr.length);
+ elgEncr = elg;
+ }
+ //_log.debug("ElGamal encrypted length: " + elgEncr.length + " elGamal source length: " + elgSrc.toByteArray().length);
+
+ // should we also feed the encrypted elG block into the harvester?
+
+ Hash ivHash = _context.sha().calculateHash(preIV);
+ byte iv[] = new byte[16];
+ System.arraycopy(ivHash.getData(), 0, iv, 0, 16);
+ byte aesEncr[] = encryptAESBlock(data, key, iv, tagsForDelivery, newKey, paddedSize);
+ //_log.debug("AES encrypted length: " + aesEncr.length);
+
+ byte rv[] = new byte[elgEncr.length + aesEncr.length];
+ System.arraycopy(elgEncr, 0, rv, 0, elgEncr.length);
+ System.arraycopy(aesEncr, 0, rv, elgEncr.length, aesEncr.length);
+ //_log.debug("Return length: " + rv.length);
+ long finish = _context.clock().now();
+ //if (_log.shouldLog(Log.DEBUG))
+ // _log.debug("after the elgEngine.encrypt took a total of " + (finish - after) + "ms");
+ return rv;
+ }
+
+ /**
+ * scenario 2:
+ * Begin with 32 byte session tag, which also serves as the preIV.
+ * Then encrypt with AES using that session key and the first 16 bytes of the SHA256 of the pre-IV:
+ * - 2 byte integer specifying the # of session tags
+ * - that many 32 byte session tags
+ * - 4 byte integer specifying data.length
+ * - SHA256 of data
+ * - 1 byte flag that, if == 1, is followed by a new SessionKey
+ * - data
+ * - random bytes, padding the total size to greater than paddedSize with a mod 16 = 0
+ *
+ */
+ byte[] encryptExistingSession(byte data[], PublicKey target, SessionKey key, Set tagsForDelivery,
+ SessionTag currentTag, SessionKey newKey, long paddedSize) {
+ //_log.debug("Encrypting to an EXISTING session");
+ byte rawTag[] = currentTag.getData();
+
+ //_log.debug("Pre IV for encryptExistingSession (aka tag): " + currentTag.toString());
+ //_log.debug("SessionKey for encryptNewSession: " + DataHelper.toString(key.getData(), 32));
+ Hash ivHash = _context.sha().calculateHash(rawTag);
+ byte iv[] = new byte[16];
+ System.arraycopy(ivHash.getData(), 0, iv, 0, 16);
+
+ byte aesEncr[] = encryptAESBlock(data, key, iv, tagsForDelivery, newKey, paddedSize, SessionTag.BYTE_LENGTH);
+ // that prepended SessionTag.BYTE_LENGTH bytes at the beginning of the buffer
+ System.arraycopy(rawTag, 0, aesEncr, 0, rawTag.length);
+ return aesEncr;
+ }
+
+ private final static Set EMPTY_SET = new HashSet();
+
+ /**
+ * For both scenarios, this method encrypts the AES area using the given key, iv
+ * and making sure the resulting data is at least as long as the paddedSize and
+ * also mod 16 bytes. The contents of the encrypted data is:
+ * - 2 byte integer specifying the # of session tags
+ * - that many 32 byte session tags
+ * - 4 byte integer specifying data.length
+ * - SHA256 of data
+ * - 1 byte flag that, if == 1, is followed by a new SessionKey
+ * - data
+ * - random bytes, padding the total size to greater than paddedSize with a mod 16 = 0
+ *
+ */
+ final byte[] encryptAESBlock(byte data[], SessionKey key, byte[] iv, Set tagsForDelivery, SessionKey newKey,
+ long paddedSize) {
+ return encryptAESBlock(data, key, iv, tagsForDelivery, newKey, paddedSize, 0);
+ }
+ final byte[] encryptAESBlock(byte data[], SessionKey key, byte[] iv, Set tagsForDelivery, SessionKey newKey,
+ long paddedSize, int prefixBytes) {
+ //_log.debug("iv for encryption: " + DataHelper.toString(iv, 16));
+ //_log.debug("Encrypting AES");
+ if (tagsForDelivery == null) tagsForDelivery = EMPTY_SET;
+ int size = 2 // sizeof(tags)
+ + tagsForDelivery.size()
+ + SessionTag.BYTE_LENGTH*tagsForDelivery.size()
+ + 4 // payload length
+ + Hash.HASH_LENGTH
+ + (newKey == null ? 1 : 1 + SessionKey.KEYSIZE_BYTES)
+ + data.length;
+ int totalSize = size + getPaddingSize(size, paddedSize);
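+ // totalSize rounds the payload up to at least paddedSize and to a multiple of the 16 byte AES block size.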
+
+ byte aesData[] = new byte[totalSize + prefixBytes];
+
+ int cur = prefixBytes;
+ DataHelper.toLong(aesData, cur, 2, tagsForDelivery.size());
+ cur += 2;
+ for (Iterator iter = tagsForDelivery.iterator(); iter.hasNext();) {
+ SessionTag tag = (SessionTag) iter.next();
+ System.arraycopy(tag.getData(), 0, aesData, cur, SessionTag.BYTE_LENGTH);
+ cur += SessionTag.BYTE_LENGTH;
+ }
+ //_log.debug("# tags created, registered, and written: " + tagsForDelivery.size());
+ DataHelper.toLong(aesData, cur, 4, data.length);
+ cur += 4;
+ //_log.debug("data length: " + data.length);
+ Hash hash = _context.sha().calculateHash(data);
+ System.arraycopy(hash.getData(), 0, aesData, cur, Hash.HASH_LENGTH);
+ cur += Hash.HASH_LENGTH;
+
+ //_log.debug("hash of data: " + DataHelper.toString(hash.getData(), 32));
+ if (newKey == null) {
+ aesData[cur++] = 0x00; // don't rekey
+ //_log.debug("flag written");
+ } else {
+ aesData[cur++] = 0x01; // rekey
+ System.arraycopy(newKey.getData(), 0, aesData, cur, SessionKey.KEYSIZE_BYTES);
+ cur += SessionKey.KEYSIZE_BYTES;
+ }
+ System.arraycopy(data, 0, aesData, cur, data.length);
+ cur += data.length;
+
+ //_log.debug("raw data written: " + len);
+ byte padding[] = getPadding(_context, size, paddedSize);
+ //_log.debug("padding length: " + padding.length);
+ System.arraycopy(padding, 0, aesData, cur, padding.length);
+ cur += padding.length;
+
+ //Hash h = _context.sha().calculateHash(data);
+ //_log.debug("Hash of entire aes block before encryption: (len=" + data.length + ")\n" + DataHelper.toString(h.getData(), 32));
+ _context.aes().encrypt(aesData, prefixBytes, aesData, prefixBytes, key, iv, aesData.length - prefixBytes);
+ //_log.debug("Encrypted length: " + aesEncr.length);
+ //return aesEncr;
+ return aesData;
+ }
+
+ /**
+ * Return random bytes for padding the data to a mod 16 size so that it is
+ * at least minPaddedSize
+ *
+ */
+ final static byte[] getPadding(I2PAppContext context, int curSize, long minPaddedSize) {
+ int size = getPaddingSize(curSize, minPaddedSize);
+ byte rv[] = new byte[size];
+ context.random().nextBytes(rv);
+ return rv;
+ }
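+ // e.g. curSize=100, minPaddedSize=128: diff=28 and (100+28) % 16 == 0, so 28 padding bytes are returned.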
+ final static int getPaddingSize(int curSize, long minPaddedSize) {
+ int diff = 0;
+ if (curSize < minPaddedSize) {
+ diff = (int) minPaddedSize - curSize;
+ }
+
+ int numPadding = diff;
+ if (((curSize + diff) % 16) != 0) numPadding += (16 - ((curSize + diff) % 16));
+ return numPadding;
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+ ElGamalAESEngine e = new ElGamalAESEngine(ctx);
+ Object kp[] = ctx.keyGenerator().generatePKIKeypair();
+ PublicKey pubKey = (PublicKey)kp[0];
+ PrivateKey privKey = (PrivateKey)kp[1];
+ SessionKey sessionKey = ctx.keyGenerator().generateSessionKey();
+ for (int i = 0; i < 10; i++) {
+ try {
+ Set tags = new HashSet(5);
+ if (i == 0) {
+ for (int j = 0; j < 5; j++)
+ tags.add(new SessionTag(true));
+ }
+ byte encrypted[] = e.encrypt("blah".getBytes(), pubKey, sessionKey, tags, 1024);
+ byte decrypted[] = e.decrypt(encrypted, privKey);
+ if ("blah".equals(new String(decrypted))) {
+ System.out.println("equal on " + i);
+ } else {
+ System.out.println("NOT equal on " + i + ": " + new String(decrypted));
+ break;
+ }
+ ctx.sessionKeyManager().tagsDelivered(pubKey, sessionKey, tags);
+ } catch (Exception ee) {
+ ee.printStackTrace();
+ break;
+ }
+ }
+ }
+}
diff --git a/src/net/i2p/crypto/ElGamalEngine.java b/src/net/i2p/crypto/ElGamalEngine.java
new file mode 100644
index 0000000..bb7585b
--- /dev/null
+++ b/src/net/i2p/crypto/ElGamalEngine.java
@@ -0,0 +1,276 @@
+package net.i2p.crypto;
+
+/*
+ * Copyright (c) 2003, TheCrypto
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice, this
+ * list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * - Neither the name of the TheCrypto may be used to endorse or promote
+ * products derived from this software without specific prior written
+ * permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+import java.math.BigInteger;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.PrivateKey;
+import net.i2p.data.PublicKey;
+import net.i2p.util.Clock;
+import net.i2p.util.Log;
+import net.i2p.util.NativeBigInteger;
+import net.i2p.util.RandomSource;
+
+/**
+ * Wrapper for ElGamal encryption/signature schemes.
+ *
+ * Does all of ElGamal now for data sizes of 223 bytes and less. The data to be
+ * encrypted is first prepended with a random nonzero byte, then the 32 bytes
+ * making up the SHA256 of the data, then the data itself. The random byte and
+ * the SHA256 hash are stripped on decrypt so the original data is returned.
+ *
+ * @author thecrypto, jrandom
+ */
+
+public class ElGamalEngine {
+ private Log _log;
+ private I2PAppContext _context;
+
+ /**
+ * The ElGamal engine should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public ElGamalEngine(I2PAppContext context) {
+ context.statManager().createRateStat("crypto.elGamal.encrypt",
+ "how long does it take to do a full ElGamal encryption", "Encryption",
+ new long[] { 60 * 1000, 60 * 60 * 1000, 24 * 60 * 60 * 1000});
+ context.statManager().createRateStat("crypto.elGamal.decrypt",
+ "how long does it take to do a full ElGamal decryption", "Encryption",
+ new long[] { 60 * 1000, 60 * 60 * 1000, 24 * 60 * 60 * 1000});
+ _context = context;
+ _log = context.logManager().getLog(ElGamalEngine.class);
+ }
+
+ private ElGamalEngine() { // nop
+ }
+
+
+ private final static BigInteger _two = new NativeBigInteger(1, new byte[] { 0x02});
+
+ private BigInteger[] getNextYK() {
+ return YKGenerator.getNextYK();
+ }
+
+ /** encrypt the data to the public key
+ * @return encrypted data
+ * @param publicKey public key encrypt to
+ * @param data data to encrypt
+ */
+ public byte[] encrypt(byte data[], PublicKey publicKey) {
+ if ((data == null) || (data.length >= 223))
+ throw new IllegalArgumentException("Data to encrypt must be < 223 bytes at the moment");
+ if (publicKey == null) throw new IllegalArgumentException("Null public key specified");
+
+ long start = _context.clock().now();
+
+ byte d2[] = new byte[1+Hash.HASH_LENGTH+data.length];
+ d2[0] = (byte)0xFF;
+ Hash hash = _context.sha().calculateHash(data);
+ System.arraycopy(hash.getData(), 0, d2, 1, Hash.HASH_LENGTH);
+ System.arraycopy(data, 0, d2, 1+Hash.HASH_LENGTH, data.length);
+
+ long t0 = _context.clock().now();
+ BigInteger m = new NativeBigInteger(1, d2);
+ long t1 = _context.clock().now();
+ if (m.compareTo(CryptoConstants.elgp) >= 0)
+ throw new IllegalArgumentException("ARGH. Data cannot be larger than the ElGamal prime. FIXME");
+ long t2 = _context.clock().now();
+ BigInteger aalpha = new NativeBigInteger(1, publicKey.getData());
+ long t3 = _context.clock().now();
+ BigInteger yk[] = getNextYK();
+ BigInteger k = yk[1];
+ BigInteger y = yk[0];
+
+ long t7 = _context.clock().now();
+ BigInteger d = aalpha.modPow(k, CryptoConstants.elgp);
+ long t8 = _context.clock().now();
+ d = d.multiply(m);
+ long t9 = _context.clock().now();
+ d = d.mod(CryptoConstants.elgp);
+ long t10 = _context.clock().now();
+
+ byte[] ybytes = y.toByteArray();
+ byte[] dbytes = d.toByteArray();
+ byte[] out = new byte[514];
+ System.arraycopy(ybytes, 0, out, (ybytes.length < 257 ? 257 - ybytes.length : 0),
+ (ybytes.length > 257 ? 257 : ybytes.length));
+ System.arraycopy(dbytes, 0, out, (dbytes.length < 257 ? 514 - dbytes.length : 257),
+ (dbytes.length > 257 ? 257 : dbytes.length));
+ /*
+ StringBuffer buf = new StringBuffer(1024);
+ buf.append("Timing\n");
+ buf.append("0-1: ").append(t1 - t0).append('\n');
+ buf.append("1-2: ").append(t2 - t1).append('\n');
+ buf.append("2-3: ").append(t3 - t2).append('\n');
+ //buf.append("3-4: ").append(t4-t3).append('\n');
+ //buf.append("4-5: ").append(t5-t4).append('\n');
+ //buf.append("5-6: ").append(t6-t5).append('\n');
+ //buf.append("6-7: ").append(t7-t6).append('\n');
+ buf.append("7-8: ").append(t8 - t7).append('\n');
+ buf.append("8-9: ").append(t9 - t8).append('\n');
+ buf.append("9-10: ").append(t10 - t9).append('\n');
+ //_log.debug(buf.toString());
+ */
+ long end = _context.clock().now();
+
+ long diff = end - start;
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Took too long to encrypt ElGamal block (" + diff + "ms)");
+ }
+
+ _context.statManager().addRateData("crypto.elGamal.encrypt", diff, diff);
+ return out;
+ }
+
+ /** Decrypt the data
+ * @param encrypted encrypted data
+ * @param privateKey private key to decrypt with
+ * @return unencrypted data
+ */
+ public byte[] decrypt(byte encrypted[], PrivateKey privateKey) {
+ if ((encrypted == null) || (encrypted.length > 514))
+ throw new IllegalArgumentException("Data to decrypt must be <= 514 bytes at the moment");
+ long start = _context.clock().now();
+
+ byte[] ybytes = new byte[257];
+ byte[] dbytes = new byte[257];
+ System.arraycopy(encrypted, 0, ybytes, 0, 257);
+ System.arraycopy(encrypted, 257, dbytes, 0, 257);
+ BigInteger y = new NativeBigInteger(1, ybytes);
+ BigInteger d = new NativeBigInteger(1, dbytes);
+ BigInteger a = new NativeBigInteger(1, privateKey.getData());
+ BigInteger y1p = CryptoConstants.elgp.subtract(BigInteger.ONE).subtract(a);
+ BigInteger ya = y.modPow(y1p, CryptoConstants.elgp);
+ BigInteger m = ya.multiply(d);
+ m = m.mod(CryptoConstants.elgp);
+ byte val[] = m.toByteArray();
+ int i = 0;
+ for (i = 0; i < val.length; i++)
+ if (val[i] != (byte) 0x00) break;
+
+ //ByteArrayInputStream bais = new ByteArrayInputStream(val, i, val.length - i);
+ byte hashData[] = new byte[Hash.HASH_LENGTH];
+ System.arraycopy(val, i + 1, hashData, 0, Hash.HASH_LENGTH);
+ Hash hash = new Hash(hashData);
+ int payloadLen = val.length - i - 1 - Hash.HASH_LENGTH;
+ if (payloadLen < 0) {
+ if (_log.shouldLog(Log.ERROR))
+ _log.error("Decrypted data is too small (" + (val.length - i)+ ")");
+ return null;
+ }
+ byte rv[] = new byte[payloadLen];
+ System.arraycopy(val, i + 1 + Hash.HASH_LENGTH, rv, 0, rv.length);
+
+ Hash calcHash = _context.sha().calculateHash(rv);
+ boolean ok = calcHash.equals(hash);
+
+ long end = _context.clock().now();
+
+ long diff = end - start;
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Took too long to decrypt and verify ElGamal block (" + diff + "ms)");
+ }
+
+ _context.statManager().addRateData("crypto.elGamal.decrypt", diff, diff);
+
+ if (ok) {
+ //_log.debug("Hash matches: " + DataHelper.toString(hash.getData(), hash.getData().length));
+ return rv;
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Doesn't match hash [sent hash=" + hash + "]\ndata = "
+ + Base64.encode(rv), new Exception("Doesn't match"));
+ return null;
+ }
+
+ public static void main(String args[]) {
+ long eTime = 0;
+ long dTime = 0;
+ long gTime = 0;
+ int numRuns = 100;
+ if (args.length > 0) try {
+ numRuns = Integer.parseInt(args[0]);
+ } catch (NumberFormatException nfe) { // nop
+ }
+
+ try {
+ Thread.sleep(30 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+
+ RandomSource.getInstance().nextBoolean();
+ I2PAppContext context = new I2PAppContext();
+
+ System.out.println("Running " + numRuns + " times");
+
+ for (int i = 0; i < numRuns; i++) {
+ long startG = Clock.getInstance().now();
+ Object pair[] = KeyGenerator.getInstance().generatePKIKeypair();
+ long endG = Clock.getInstance().now();
+
+ PublicKey pubkey = (PublicKey) pair[0];
+ PrivateKey privkey = (PrivateKey) pair[1];
+ byte buf[] = new byte[128];
+ RandomSource.getInstance().nextBytes(buf);
+ long startE = Clock.getInstance().now();
+ byte encr[] = context.elGamalEngine().encrypt(buf, pubkey);
+ long endE = Clock.getInstance().now();
+ byte decr[] = context.elGamalEngine().decrypt(encr, privkey);
+ long endD = Clock.getInstance().now();
+ eTime += endE - startE;
+ dTime += endD - endE;
+ gTime += endG - startG;
+
+ if (!DataHelper.eq(decr, buf)) {
+ System.out.println("PublicKey : " + DataHelper.toString(pubkey.getData(), pubkey.getData().length));
+ System.out.println("PrivateKey : " + DataHelper.toString(privkey.getData(), privkey.getData().length));
+ System.out.println("orig : " + DataHelper.toString(buf, buf.length));
+ System.out.println("d(e(orig) : " + DataHelper.toString(decr, decr.length));
+ System.out.println("orig.len : " + buf.length);
+ System.out.println("d(e(orig).len : " + decr.length);
+ System.out.println("Not equal!");
+ System.exit(0);
+ } else {
+ System.out.println("*Run " + i + " is successful, with encr.length = " + encr.length + " [E: "
+ + (endE - startE) + " D: " + (endD - endE) + " G: " + (endG - startG) + "]\n");
+ }
+ }
+ System.out.println("\n\nAll " + numRuns + " tests successful, average encryption time: " + (eTime / numRuns)
+ + " average decryption time: " + (dTime / numRuns) + " average key generation time: "
+ + (gTime / numRuns));
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/EntropyHarvester.java b/src/net/i2p/crypto/EntropyHarvester.java
new file mode 100644
index 0000000..a635d6b
--- /dev/null
+++ b/src/net/i2p/crypto/EntropyHarvester.java
@@ -0,0 +1,30 @@
+package net.i2p.crypto;
+
+/**
+ * Allow various components with some entropy to feed that entropy back
+ * into some PRNG. The quality of the entropy provided varies, so anything
+ * harvesting should discriminate based on the offered "source" of the
+ * entropy, silently discarding insufficient entropy sources.
+ *
+ */
+public interface EntropyHarvester {
+ /**
+ * Feed the entropy pools with data[offset:offset+len]
+ *
+ * @param source origin of the entropy, allowing the harvester to
+ * determine how much to value the data
+ * @param offset index into the data array to start
+ * @param len how many bytes to use
+ */
+ void feedEntropy(String source, byte data[], int offset, int len);
+ /**
+ * Feed the entropy pools with the bits in the data
+ *
+ * @param source origin of the entropy, allowing the harvester to
+ * determine how much to value the data
+ * @param bitoffset bit index into the data array to start
+ * (using java standard big-endian)
+ * @param bits how many bits to use
+ */
+ void feedEntropy(String source, long data, int bitoffset, int bits);
+}
diff --git a/src/net/i2p/crypto/HMAC256Generator.java b/src/net/i2p/crypto/HMAC256Generator.java
new file mode 100644
index 0000000..2d10629
--- /dev/null
+++ b/src/net/i2p/crypto/HMAC256Generator.java
@@ -0,0 +1,51 @@
+package net.i2p.crypto;
+
+import gnu.crypto.hash.Sha256Standalone;
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.Hash;
+import net.i2p.data.SessionKey;
+import org.bouncycastle.crypto.Digest;
+import org.bouncycastle.crypto.macs.HMac;
+
+/**
+ * Calculate the HMAC-SHA256 of a key+message. All the good stuff occurs
+ * in {@link org.bouncycastle.crypto.macs.HMac} and
+ * {@link gnu.crypto.hash.Sha256Standalone} (wrapped by Sha256ForMAC below).
+ *
+ */
+public class HMAC256Generator extends HMACGenerator {
+ public HMAC256Generator(I2PAppContext context) { super(context); }
+
+ protected HMac acquire() {
+ synchronized (_available) {
+ if (_available.size() > 0)
+ return (HMac)_available.remove(0);
+ }
+ // no cached instance available - build a new HMAC around the standalone
+ // SHA256 digest (see Sha256ForMAC below)
+ return new HMac(new Sha256ForMAC());
+ }
+
+ private class Sha256ForMAC extends Sha256Standalone implements Digest {
+ public String getAlgorithmName() { return "sha256 for hmac"; }
+ public int getDigestSize() { return 32; }
+ public int doFinal(byte[] out, int outOff) {
+ byte rv[] = digest();
+ System.arraycopy(rv, 0, out, outOff, rv.length);
+ reset();
+ return rv.length;
+ }
+
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ byte data[] = new byte[64];
+ ctx.random().nextBytes(data);
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ Hash mac = ctx.hmac256().calculate(key, data);
+ System.out.println(Base64.encode(mac.getData()));
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/HMACGenerator.java b/src/net/i2p/crypto/HMACGenerator.java
new file mode 100644
index 0000000..fa853df
--- /dev/null
+++ b/src/net/i2p/crypto/HMACGenerator.java
@@ -0,0 +1,124 @@
+package net.i2p.crypto;
+
+import java.util.Arrays;
+import java.util.ArrayList;
+import java.util.List;
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.SessionKey;
+
+import org.bouncycastle.crypto.digests.MD5Digest;
+import org.bouncycastle.crypto.macs.HMac;
+
+/**
+ * Calculate the HMAC-MD5 of a key+message. All the good stuff occurs
+ * in {@link org.bouncycastle.crypto.macs.HMac} and
+ * {@link org.bouncycastle.crypto.digests.MD5Digest}.
+ *
+ */
+public class HMACGenerator {
+ private I2PAppContext _context;
+ /** set of available HMAC instances for calculate */
+ protected List _available;
+ /** set of available byte[] buffers for verify */
+ private List _availableTmp;
+
+ public HMACGenerator(I2PAppContext context) {
+ _context = context;
+ _available = new ArrayList(32);
+ _availableTmp = new ArrayList(32);
+ }
+
+ /**
+ * Calculate the HMAC of the data with the given key
+ */
+ public Hash calculate(SessionKey key, byte data[]) {
+ if ((key == null) || (key.getData() == null) || (data == null))
+ throw new NullPointerException("Null arguments for HMAC");
+ byte rv[] = new byte[Hash.HASH_LENGTH];
+ calculate(key, data, 0, data.length, rv, 0);
+ return new Hash(rv);
+ }
+
+ /**
+ * Calculate the HMAC of the data with the given key
+ */
+ public void calculate(SessionKey key, byte data[], int offset, int length, byte target[], int targetOffset) {
+ if ((key == null) || (key.getData() == null) || (data == null))
+ throw new NullPointerException("Null arguments for HMAC");
+
+ HMac mac = acquire();
+ mac.init(key.getData());
+ mac.update(data, offset, length);
+ //byte rv[] = new byte[Hash.HASH_LENGTH];
+ mac.doFinal(target, targetOffset);
+ release(mac);
+ //return new Hash(rv);
+ }
+
+ /**
+ * Verify the MAC inline, reducing some unnecessary memory churn.
+ *
+ * @param key session key to verify the MAC with
+ * @param curData MAC to verify
+ * @param curOffset index into curData to MAC
+ * @param curLength how much data in curData do we want to run the HMAC over
+ * @param origMAC what do we expect the MAC of curData to equal
+ * @param origMACOffset index into origMAC
+ * @param origMACLength how much of the MAC do we want to verify
+ */
+ public boolean verify(SessionKey key, byte curData[], int curOffset, int curLength, byte origMAC[], int origMACOffset, int origMACLength) {
+ if ((key == null) || (key.getData() == null) || (curData == null))
+ throw new NullPointerException("Null arguments for HMAC");
+
+ HMac mac = acquire();
+ mac.init(key.getData());
+ mac.update(curData, curOffset, curLength);
+ byte rv[] = acquireTmp();
+ //byte rv[] = new byte[Hash.HASH_LENGTH];
+ mac.doFinal(rv, 0);
+ release(mac);
+
+ boolean eq = DataHelper.eq(rv, 0, origMAC, origMACOffset, origMACLength);
+ releaseTmp(rv);
+ return eq;
+ }
+
+ protected HMac acquire() {
+ synchronized (_available) {
+ if (_available.size() > 0)
+ return (HMac)_available.remove(0);
+ }
+ // the HMAC is hardcoded to use SHA256 digest size
+ // for backwards compatibility. next time we have a backwards
+ // incompatible change, we should update this by removing ", 32"
+ return new HMac(new MD5Digest(), 32);
+ }
+ private void release(HMac mac) {
+ synchronized (_available) {
+ if (_available.size() < 64)
+ _available.add(mac);
+ }
+ }
+
+ // temp buffers for verify(..)
+ private byte[] acquireTmp() {
+ byte rv[] = null;
+ synchronized (_availableTmp) {
+ if (_availableTmp.size() > 0)
+ rv = (byte[])_availableTmp.remove(0);
+ }
+ if (rv != null)
+ Arrays.fill(rv, (byte)0x0);
+ else
+ rv = new byte[Hash.HASH_LENGTH];
+ return rv;
+ }
+ private void releaseTmp(byte tmp[]) {
+ synchronized (_availableTmp) {
+ if (_availableTmp.size() < 64)
+ _availableTmp.add((Object)tmp);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/KeyGenerator.java b/src/net/i2p/crypto/KeyGenerator.java
new file mode 100644
index 0000000..a221f9e
--- /dev/null
+++ b/src/net/i2p/crypto/KeyGenerator.java
@@ -0,0 +1,227 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import gnu.crypto.hash.Sha256Standalone;
+import java.math.BigInteger;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.PrivateKey;
+import net.i2p.data.PublicKey;
+import net.i2p.data.SessionKey;
+import net.i2p.data.Signature;
+import net.i2p.data.SigningPrivateKey;
+import net.i2p.data.SigningPublicKey;
+import net.i2p.util.Clock;
+import net.i2p.util.Log;
+import net.i2p.util.NativeBigInteger;
+import net.i2p.util.RandomSource;
+
+/** Define a way of generating asymmetrical key pairs as well as symmetrical keys
+ * @author jrandom
+ */
+public class KeyGenerator {
+ private Log _log;
+ private I2PAppContext _context;
+
+ public KeyGenerator(I2PAppContext context) {
+ _log = context.logManager().getLog(KeyGenerator.class);
+ _context = context;
+ }
+ public static KeyGenerator getInstance() {
+ return I2PAppContext.getGlobalContext().keyGenerator();
+ }
+
+
+
+ /** Generate a private 256 bit session key
+ * @return session key
+ */
+ public SessionKey generateSessionKey() {
+ // 256bit random # as a session key
+ SessionKey key = new SessionKey();
+ byte data[] = new byte[SessionKey.KEYSIZE_BYTES];
+ _context.random().nextBytes(data);
+ key.setData(data);
+ return key;
+ }
+
+ private static final int PBE_ROUNDS = 1000;
+ /** PBE the passphrase with the salt */
+ public SessionKey generateSessionKey(byte salt[], byte passphrase[]) {
+ byte salted[] = new byte[16+passphrase.length];
+ System.arraycopy(salt, 0, salted, 0, Math.min(salt.length, 16));
+ System.arraycopy(passphrase, 0, salted, 16, passphrase.length);
+ byte h[] = _context.sha().calculateHash(salted).getData();
+ for (int i = 1; i < PBE_ROUNDS; i++)
+ _context.sha().calculateHash(h, 0, Hash.HASH_LENGTH, h, 0);
+ return new SessionKey(h);
+ }
+
+ /** standard exponent size */
+ private static final int PUBKEY_EXPONENT_SIZE_FULL = 2048;
+ /**
+ * short exponent size, which should be safe for use with the Oakley primes,
+ * per "On Diffie-Hellman Key Agreement with Short Exponents" - van Oorschot, Weiner
+ * at EuroCrypt 96, and crypto++'s benchmarks at http://www.eskimo.com/~weidai/benchmarks.html
+ * Also, "Koshiba & Kurosawa: Short Exponent Diffie-Hellman Problems" (PKC 2004, LNCS 2947, pp. 173-186)
+ * aparently supports this, according to
+ * http://groups.google.com/group/sci.crypt/browse_thread/thread/1855a5efa7416677/339fa2f945cc9ba0#339fa2f945cc9ba0
+ * (damn commercial access to http://www.springerlink.com/(xrkdvv45w0cmnur4aimsxx55)/app/home/contribution.asp?referrer=parent&backto=issue,13,31;journal,893,3280;linkingpublicationresults,1:105633,1 )
+ */
+ private static final int PUBKEY_EXPONENT_SIZE_SHORT = 226;
+ public static final int PUBKEY_EXPONENT_SIZE = PUBKEY_EXPONENT_SIZE_SHORT;
+
+ /** Generate a pair of keys, where index 0 is a PublicKey, and
+ * index 1 is a PrivateKey
+ * @return pair of keys
+ */
+ public Object[] generatePKIKeypair() {
+ BigInteger a = new NativeBigInteger(PUBKEY_EXPONENT_SIZE, _context.random());
+ BigInteger aalpha = CryptoConstants.elgg.modPow(a, CryptoConstants.elgp);
+
+ Object[] keys = new Object[2];
+ keys[0] = new PublicKey();
+ keys[1] = new PrivateKey();
+ byte[] k0 = aalpha.toByteArray();
+ byte[] k1 = a.toByteArray();
+
+ // BigInteger.toByteArray returns SIGNED integers, but since they're positive,
+ // signed two's complement is the same as unsigned
+
+ ((PublicKey) keys[0]).setData(padBuffer(k0, PublicKey.KEYSIZE_BYTES));
+ ((PrivateKey) keys[1]).setData(padBuffer(k1, PrivateKey.KEYSIZE_BYTES));
+
+ return keys;
+ }
+
+ /** Convert a PrivateKey to its corresponding PublicKey
+ * @param priv PrivateKey object
+ * @return the corresponding PublicKey object
+ */
+ public static PublicKey getPublicKey(PrivateKey priv) {
+ BigInteger a = new NativeBigInteger(1, priv.toByteArray());
+ BigInteger aalpha = CryptoConstants.elgg.modPow(a, CryptoConstants.elgp);
+ PublicKey pub = new PublicKey();
+ byte [] pubBytes = aalpha.toByteArray();
+ pub.setData(padBuffer(pubBytes, PublicKey.KEYSIZE_BYTES));
+ return pub;
+ }
+
+ /** Generate a pair of DSA keys, where index 0 is a SigningPublicKey, and
+ * index 1 is a SigningPrivateKey
+ * @return pair of keys
+ */
+ public Object[] generateSigningKeypair() {
+ Object[] keys = new Object[2];
+ BigInteger x = null;
+
+ // make sure the random key is less than the DSA q
+ do {
+ x = new NativeBigInteger(160, _context.random());
+ } while (x.compareTo(CryptoConstants.dsaq) >= 0);
+
+ BigInteger y = CryptoConstants.dsag.modPow(x, CryptoConstants.dsap);
+ keys[0] = new SigningPublicKey();
+ keys[1] = new SigningPrivateKey();
+ byte k0[] = padBuffer(y.toByteArray(), SigningPublicKey.KEYSIZE_BYTES);
+ byte k1[] = padBuffer(x.toByteArray(), SigningPrivateKey.KEYSIZE_BYTES);
+
+ ((SigningPublicKey) keys[0]).setData(k0);
+ ((SigningPrivateKey) keys[1]).setData(k1);
+ return keys;
+ }
+
+ /** Convert a SigningPrivateKey to a SigningPublicKey
+ * @param priv a SigningPrivateKey object
+ * @return a SigningPublicKey object
+ */
+ public static SigningPublicKey getSigningPublicKey(SigningPrivateKey priv) {
+ BigInteger x = new NativeBigInteger(1, priv.toByteArray());
+ BigInteger y = CryptoConstants.dsag.modPow(x, CryptoConstants.dsap);
+ SigningPublicKey pub = new SigningPublicKey();
+ byte [] pubBytes = padBuffer(y.toByteArray(), SigningPublicKey.KEYSIZE_BYTES);
+ pub.setData(pubBytes);
+ return pub;
+ }
+
+ /**
+ * Pad the buffer w/ leading 0s or trim off leading bits so the result is the
+ * given length.
+ */
+ final static byte[] padBuffer(byte src[], int length) {
+ byte buf[] = new byte[length];
+
+ if (src.length > buf.length) // extra bits, chop leading bits
+ System.arraycopy(src, src.length - buf.length, buf, 0, buf.length);
+ else if (src.length < buf.length) // short bits, pad w/ 0s
+ System.arraycopy(src, 0, buf, buf.length - src.length, src.length);
+ else
+ // eq
+ System.arraycopy(src, 0, buf, 0, buf.length);
+
+ return buf;
+ }
+
+ public static void main(String args[]) {
+ Log log = new Log("keygenTest");
+ RandomSource.getInstance().nextBoolean();
+ byte src[] = new byte[200];
+ RandomSource.getInstance().nextBytes(src);
+
+ I2PAppContext ctx = new I2PAppContext();
+ long time = 0;
+ for (int i = 0; i < 10; i++) {
+ long start = Clock.getInstance().now();
+ Object keys[] = KeyGenerator.getInstance().generatePKIKeypair();
+ long end = Clock.getInstance().now();
+ byte ctext[] = ctx.elGamalEngine().encrypt(src, (PublicKey) keys[0]);
+ byte ptext[] = ctx.elGamalEngine().decrypt(ctext, (PrivateKey) keys[1]);
+ time += end - start;
+ if (DataHelper.eq(ptext, src))
+ log.debug("D(E(data)) == data");
+ else
+ log.error("D(E(data)) != data!!!!!!");
+ }
+ log.info("Keygen 10 times: " + time + "ms");
+
+ Object obj[] = KeyGenerator.getInstance().generateSigningKeypair();
+ SigningPublicKey fake = (SigningPublicKey) obj[0];
+ time = 0;
+ for (int i = 0; i < 10; i++) {
+ long start = Clock.getInstance().now();
+ Object keys[] = KeyGenerator.getInstance().generateSigningKeypair();
+ long end = Clock.getInstance().now();
+ Signature sig = DSAEngine.getInstance().sign(src, (SigningPrivateKey) keys[1]);
+ boolean ok = DSAEngine.getInstance().verifySignature(sig, src, (SigningPublicKey) keys[0]);
+ boolean fakeOk = DSAEngine.getInstance().verifySignature(sig, src, fake);
+ time += end - start;
+ log.debug("V(S(data)) == " + ok + " fake verify correctly failed? " + (fakeOk == false));
+ }
+ log.info("Signing Keygen 10 times: " + time + "ms");
+
+ time = 0;
+ for (int i = 0; i < 1000; i++) {
+ long start = Clock.getInstance().now();
+ KeyGenerator.getInstance().generateSessionKey();
+ long end = Clock.getInstance().now();
+ time += end - start;
+ }
+ log.info("Session keygen 1000 times: " + time + "ms");
+
+ try {
+ Thread.sleep(5000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/PersistentSessionKeyManager.java b/src/net/i2p/crypto/PersistentSessionKeyManager.java
new file mode 100644
index 0000000..811e6e4
--- /dev/null
+++ b/src/net/i2p/crypto/PersistentSessionKeyManager.java
@@ -0,0 +1,190 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataFormatException;
+import net.i2p.data.DataHelper;
+import net.i2p.data.PublicKey;
+import net.i2p.data.SessionKey;
+import net.i2p.data.SessionTag;
+import net.i2p.util.Log;
+
+/**
+ * Expose the functionality to allow people to write out and read in the
+ * session key and session tag information via streams. This implementation
+ * does not write anywhere except where its told.
+ *
+ */
+public class PersistentSessionKeyManager extends TransientSessionKeyManager {
+ private Log _log;
+ private Object _yk = YKGenerator.class;
+
+
+ /**
+ * The session key manager should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public PersistentSessionKeyManager(I2PAppContext context) {
+ super(context);
+ _log = context.logManager().getLog(PersistentSessionKeyManager.class);
+ }
+ private PersistentSessionKeyManager() {
+ this(null);
+ }
+ /**
+ * Write the session key data to the given stream
+ *
+ */
+ public void saveState(OutputStream out) throws IOException, DataFormatException {
+ if (true) return;
+
+ Set tagSets = getInboundTagSets();
+ Set sessions = getOutboundSessions();
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Saving state with " + tagSets.size() + " inbound tagSets and "
+ + sessions.size() + " outbound sessions");
+
+ DataHelper.writeLong(out, 4, tagSets.size());
+ for (Iterator iter = tagSets.iterator(); iter.hasNext();) {
+ TagSet ts = (TagSet) iter.next();
+ writeTagSet(out, ts);
+ }
+ DataHelper.writeLong(out, 4, sessions.size());
+ for (Iterator iter = sessions.iterator(); iter.hasNext();) {
+ OutboundSession sess = (OutboundSession) iter.next();
+ writeOutboundSession(out, sess);
+ }
+ }
+
+ /**
+ * Load the session key data from the given stream
+ *
+ */
+ public void loadState(InputStream in) throws IOException, DataFormatException {
+ int inboundSets = (int) DataHelper.readLong(in, 4);
+ Set tagSets = new HashSet(inboundSets);
+ for (int i = 0; i < inboundSets; i++) {
+ TagSet ts = readTagSet(in);
+ tagSets.add(ts);
+ }
+ int outboundSessions = (int) DataHelper.readLong(in, 4);
+ Set sessions = new HashSet(outboundSessions);
+ for (int i = 0; i < outboundSessions; i++) {
+ OutboundSession sess = readOutboundSession(in);
+ sessions.add(sess);
+ }
+
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Loading state with " + tagSets.size() + " inbound tagSets and "
+ + sessions.size() + " outbound sessions");
+ setData(tagSets, sessions);
+ }
+
+ private void writeOutboundSession(OutputStream out, OutboundSession sess) throws IOException, DataFormatException {
+ sess.getTarget().writeBytes(out);
+ sess.getCurrentKey().writeBytes(out);
+ DataHelper.writeDate(out, new Date(sess.getEstablishedDate()));
+ DataHelper.writeDate(out, new Date(sess.getLastUsedDate()));
+ List sets = sess.getTagSets();
+ DataHelper.writeLong(out, 2, sets.size());
+ for (Iterator iter = sets.iterator(); iter.hasNext();) {
+ TagSet set = (TagSet) iter.next();
+ writeTagSet(out, set);
+ }
+ }
+
+ private void writeTagSet(OutputStream out, TagSet ts) throws IOException, DataFormatException {
+ ts.getAssociatedKey().writeBytes(out);
+ DataHelper.writeDate(out, new Date(ts.getDate()));
+ DataHelper.writeLong(out, 2, ts.getTags().size());
+ for (Iterator iter = ts.getTags().iterator(); iter.hasNext();) {
+ SessionTag tag = (SessionTag) iter.next();
+ out.write(tag.getData());
+ }
+ }
+
+ private OutboundSession readOutboundSession(InputStream in) throws IOException, DataFormatException {
+ PublicKey key = new PublicKey();
+ key.readBytes(in);
+ SessionKey skey = new SessionKey();
+ skey.readBytes(in);
+ Date established = DataHelper.readDate(in);
+ Date lastUsed = DataHelper.readDate(in);
+ int tagSets = (int) DataHelper.readLong(in, 2);
+ ArrayList sets = new ArrayList(tagSets);
+ for (int i = 0; i < tagSets; i++) {
+ TagSet ts = readTagSet(in);
+ sets.add(ts);
+ }
+
+ return new OutboundSession(key, skey, established.getTime(), lastUsed.getTime(), sets);
+ }
+
+ private TagSet readTagSet(InputStream in) throws IOException, DataFormatException {
+ SessionKey key = new SessionKey();
+ key.readBytes(in);
+ Date date = DataHelper.readDate(in);
+ int numTags = (int) DataHelper.readLong(in, 2);
+ Set tags = new HashSet(numTags);
+ for (int i = 0; i < numTags; i++) {
+ SessionTag tag = new SessionTag();
+ byte val[] = new byte[SessionTag.BYTE_LENGTH];
+ int read = DataHelper.read(in, val);
+ if (read != SessionTag.BYTE_LENGTH)
+ throw new IOException("Unable to fully read a session tag [" + read + " not " + SessionTag.BYTE_LENGTH
+ + ")");
+ tag.setData(val);
+ tags.add(tag);
+ }
+ TagSet ts = new TagSet(tags, key, _context.clock().now());
+ ts.setDate(date.getTime());
+ return ts;
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+ Log log = ctx.logManager().getLog(PersistentSessionKeyManager.class);
+ PersistentSessionKeyManager mgr = (PersistentSessionKeyManager)ctx.sessionKeyManager();
+ try {
+ mgr.loadState(new FileInputStream("sessionKeys.dat"));
+ String state = mgr.renderStatusHTML();
+ FileOutputStream fos = new FileOutputStream("sessionKeysBeforeExpire.html");
+ fos.write(state.getBytes());
+ fos.close();
+ int expired = mgr.aggressiveExpire();
+ log.error("Expired: " + expired);
+ String stateAfter = mgr.renderStatusHTML();
+ FileOutputStream fos2 = new FileOutputStream("sessionKeysAfterExpire.html");
+ fos2.write(stateAfter.getBytes());
+ fos2.close();
+ } catch (Throwable t) {
+ log.error("Error loading/storing sessionKeys", t);
+ }
+ try {
+ Thread.sleep(3000);
+ } catch (Throwable t) { // nop
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/crypto/SHA1.java b/src/net/i2p/crypto/SHA1.java
new file mode 100644
index 0000000..35d68a0
--- /dev/null
+++ b/src/net/i2p/crypto/SHA1.java
@@ -0,0 +1,697 @@
+package net.i2p.crypto;
+/* @(#)SHA1.java 1.11 2004-04-26
+ * This file was freely contributed to the LimeWire project and is covered
+ * by its existing GPL licence, but it may be used individually as a public
+ * domain implementation of a published algorithm (see below for references).
+ * It was also freely contributed to the Bitzi public domain sources.
+ * @author Philippe Verdy
+ */
+
+/* Sun may wish to change the following package name, if integrating this
+ * class in the Sun JCE Security Provider for Java 1.5 (code-named Tiger).
+ *
+ * You can include it in your own Security Provider by inserting
+ * this property in your Provider derived class:
+ * put("MessageDigest.SHA-1", "com.bitzi.util.SHA1");
+ */
+//package com.bitzi.util;
+import java.security.*;
+//--+---+1--+---+--2+---+---+3--+---+--4+---+---+5--+---+--6+---+---+7--+---+--
+//34567890123456789012345678901234567890123456789012345678901234567890123456789
+
+/**
+ * The FIPS PUB 180-2 standard specifies four secure hash algorithms (SHA-1,
+ * SHA-256, SHA-384 and SHA-512) for computing a condensed representation of
+ * electronic data (message). When a message of any length < 2^^64 bits (for
+ * SHA-1 and SHA-256) or < 2^^128 bits (for SHA-384 and SHA-512) is input to
+ * an algorithm, the result is an output called a message digest. The message
+ * digests range in length from 160 to 512 bits, depending on the algorithm.
+ * Secure hash algorithms are typically used with other cryptographic
+ * algorithms, such as digital signature algorithms and keyed-hash message
+ * authentication codes, or in the generation of random numbers (bits).
+ *
+ * The four hash algorithms specified in this "SHS" standard are called
+ * secure because, for a given algorithm, it is computationally infeasible
+ * 1) to find a message that corresponds to a given message digest, or 2)
+ * to find two different messages that produce the same message digest. Any
+ * change to a message will, with a very high probability, result in a
+ * different message digest. This will result in a verification failure when
+ * the secure hash algorithm is used with a digital signature algorithm or a
+ * keyed-hash message authentication algorithm.
+ *
+ * A "SHS change notice" adds a SHA-224 algorithm for interoperability,
+ * which, like SHA-1 and SHA-256, operates on 512-bit blocks and 32-bit words,
+ * but truncates the final digest and uses distinct initialization values.
+ *
+ * References: NIST FIPS PUB 180-2, Secure Hash Standard (SHS).
+ */
+
+/**
+ * A BigInteger that takes advantage of the jbigi library for the modPow operation,
+ * which accounts for a massive segment of the processing cost of asymmetric
+ * crypto. It also takes advantage of the jbigi library for converting a BigInteger
+ * value to a double, as Sun's implementation of the 'doubleValue()' method is _very_ lousy.
+ *
+ * The jbigi library itself is basically just a JNI wrapper around the
+ * GMP library - a collection of insanely efficient routines for dealing with
+ * big numbers.
+ *
+ * If jbigi.enable is set to false, this class won't even attempt to use the
+ * native library, but if it is set to true (or is not specified), it will first
+ * check the platform specific library path for the "jbigi" library, as defined by
+ * {@link Runtime#loadLibrary} - e.g. C:\windows\jbigi.dll or /lib/libjbigi.so, as
+ * well as the CLASSPATH for a resource named 'jbigi'. If that fails, it reviews
+ * the jbigi.impl environment property - if that is set, it checks all of the
+ * components in the CLASSPATH for the file specified and attempts to load it as
+ * the native library. If jbigi.impl is not set, it uses the jcpuid library
+ * described below. If there is still no matching resource, or if that resource
+ * is not a valid OS/architecture specific library, the NativeBigInteger will
+ * revert to using the pure java implementation.
+ *
+ * When attempting to load the native implementation as a resource from the CLASSPATH,
+ * the NativeBigInteger will make use of the jcpuid component which runs some assembly
+ * code to determine the current CPU implementation, such as "pentium4" or "k623".
+ * We then use that, combined with the OS, to build an optimized resource name - e.g.
+ * "net/i2p/util/libjbigi-freebsd-pentium4.so" or "net/i2p/util/jbigi-windows-k623.dll".
+ * If that resource exists, we use it. If it doesn't (or the jcpuid component fails),
+ * we try a generic native implementation using "none" for the CPU (ala
+ * "net/i2p/util/jbigi-windows-none.dll").
+ *
+ * Running this class by itself does a basic unit test and benchmark, comparing
+ * BigInteger.modPow/doubleValue against NativeBigInteger.modPow/doubleValue on
+ * some really big (2Kbit) numbers 100 different times, reporting whether the
+ * native implementation was loaded and how the two compare (or shitting a brick
+ * if the results don't match).
+ */
+
+Syndie stores its local data in three places: a database of the channels and
+messages it knows (in
+$dataRoot/db/
), an archive of those signed messages in the
+$dataRoot/archive/
file hierarchy, and an archive of locally
+created but not yet distributed messages in the
+$dataRoot/outbound/
file hierarchy. The contents of
+$dataRoot/archive/
can be wiped out without any loss of
+functionality, though doing so prevents the Syndie instance from sharing
+the authenticated messages with other people.
+
+Inside the $dataRoot/archive/
directory, each channel
+has its own subdirectory containing the channel's metadata (in meta.syndie
)
+and posts (in $messageId.syndie
). In addition, there are three
+index files for each channel (index-all.dat, index-new.dat,
+index-unauthorized.dat
) as well as four index files for the
+entire archive itself (index-all.dat, index-meta.dat, index-new.dat,
+index-unauthorized.dat
). These indexes are rebuilt with the
+buildindex command, summarizing all posts
+in the channel/archive (index-all.dat
+and $channelHash/index-all.dat
), all posts received or published
+in the last few days (index-new.dat
and
+$channelHash/index-new.dat
),
+the metadata editions of all known channels (index-meta.dat
),
+and all posts for individual
+channels that are not authorized (index-unauthorized.dat
and
+$channelHash/index-unauthorized.dat
).
+ $channelHash // 32 byte SHA256 value
+ $channelEdition // 4 byte unsigned integer
 $receiveDate // 4 byte unsigned integer - days since 1970/1/1
+ $metaFileSize // 4 byte unsigned integer - size of the meta.syndie file
+ $numMessages // 4 byte unsigned integer - how many messages are known
+ $indexedMessages // 4 byte unsigned integer - # messages following
+ for (i = 0; i < $indexedMessages; i++)
+ $messageId // 8 byte unsigned integer
+ $receiveDate // 4 byte unsigned integer - days since 1970/1/1
+ $entryFileSize // 4 byte unsigned integer - size of $messageId.syndie
+ $flags // 1 byte.
+ // 1<<7: authorized
+ // 1<<6: private reply
+ // 1<<5: password based encrypted
+ // 1<<4: archive considers the post "new"
+ // external chan refs in index-all and index-new refer to posts that
+ // are in another scope but both target this $channelHash scope and
+ // are authorized. unsigned chan refs in index-unauthorized are the
+ // same, but not authorized
+ $externalChanRefs // 4 byte unsigned integer - # channels following
+ for (i = 0; i < $externalChanRefs; i++)
+ $scopeHash // 32 byte SHA256 value that the post is in
+ $posts // 4 byte unsigned integer - messages following
 for (j = 0; j < $posts; j++)
+ $messageId // 8 byte unsigned integer
+ $receiveDate // 4 byte unsigned integer - days since 1970/1/1
+ $entrySize // 4 byte unsigned integer - sizeof .syndie file
+ $flags // 1 byte.
+ // 1<<7: authorized
+ // 1<<6: private reply
+ // 1<<5: password based encrypted
+ // 1<<4: archive considers the post "new"
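As a reading aid for the layout above, here is a minimal, hypothetical Java sketch that
walks a per-channel index header and its message entries. The field widths follow the
listing; the byte order is assumed to be big-endian (the listing does not say), and the
class and helper names are made up for illustration rather than taken from the Syndie
sources.

  import java.io.DataInputStream;
  import java.io.FileInputStream;
  import java.io.IOException;

  /** illustrative reader for the per-channel index layout described above */
  public class ChannelIndexDump {
      public static void main(String args[]) throws IOException {
          DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
          byte chanHash[] = new byte[32];
          in.readFully(chanHash);                  // $channelHash (32 byte SHA256)
          long edition  = readUInt(in);            // $channelEdition
          long recvDate = readUInt(in);            // days since 1970/1/1
          long metaSize = readUInt(in);            // size of meta.syndie
          long known    = readUInt(in);            // $numMessages
          long indexed  = readUInt(in);            // $indexedMessages
          System.out.println("edition " + edition + ", " + known + " messages known, "
                             + indexed + " indexed, meta is " + metaSize + " bytes");
          for (long i = 0; i < indexed; i++) {
              long messageId = in.readLong();      // 8 byte unsigned integer
              long msgDate   = readUInt(in);       // days since 1970/1/1
              long entrySize = readUInt(in);       // size of $messageId.syndie
              int flags      = in.readUnsignedByte();
              System.out.println(messageId + ": " + entrySize + " bytes"
                                 + " authorized=" + ((flags & (1 << 7)) != 0)
                                 + " reply="      + ((flags & (1 << 6)) != 0)
                                 + " pbe="        + ((flags & (1 << 5)) != 0)
                                 + " new="        + ((flags & (1 << 4)) != 0));
          }
          in.close();
      }
      /** read a 4 byte unsigned integer */
      private static long readUInt(DataInputStream in) throws IOException {
          return in.readInt() & 0xFFFFFFFFL;
      }
  }

The external channel reference section that follows the per-message entries can be read
the same way; syndie.db.ArchiveIndex remains the authoritative implementation.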
+
+
+Individual posts live under $dataRoot/archive/$scope/$messageId.syndie
, and metadata under
+$dataRoot/archive/$scope/meta.syndie
. The externally referenced
+posts are found under their original scope path, not the targeted channel
+path - $dataRoot/archive/$scopeHash/$messageId.syndie and not
+$dataRoot/archive/$channelHash/$messageId.syndie.
+
+The $dataRoot/archive/index-*.dat
+ files simply concatenate the
+$dataRoot/archive/$channelHash/index-*.dat
files together.
+These file formats are implemented in the syndie.db.ArchiveIndex
+class, and are subject to change.
+
+To share the archive, publish $dataRoot/archive/
+in a webserver and tell people the URL. They will then be able to load up
+their Syndie instance and use the getindex and fetch
+commands to pull posts from the archive into their local Syndie instance.
+
+Uploads are handled by the bundled import.cgi
+ CGI script - simply place the import.cgi
+in the $dataRoot/archive/
directory, mark it as executable and
+tell your webserver to run .cgi
files as CGI scripts. For example,
+the following Apache httpd.conf directives would suffice (assuming your login
+was jrandom
):
+ <Directory /home/jrandom/.syndie/archive/>
+ Options ExecCGI Indexes
+ </Directory>
+ Alias /archive/ "/home/jrandom/.syndie/archive/"
+ AddHandler cgi-script .cgi
+ AddType application/x-syndie .syndie
+
+
+/tmp/cgiImport/
(another directory can be chosen
+by modifying the import.cgi
). In addition, while the CGI allows
+anyone to upload posts by default, you can require a password instead - simply
+set the $requiredPassphrase
in the CGI and share that value with
+those authorized to upload posts. Authorized users will then be able to post
+by providing that value in the --pass $passphrase
 parameter (for
+put
).
+ login
+ menu syndicate
+ bulkimport --dir /tmp/cgiImport --delete true
+ buildindex
+ exit
+
+
+The bulkimport call pulls in all of the .syndie
 files
+stored in /tmp/cgiImport
, deleting the original files on
+completion. The buildindex
then regenerates the index-*
+files. An example cron
line would be:
+30 * * * * /home/jrandom/syndie/bin/syndie @/home/jrandom/syndie/bin/bulkimport.syndie > /home/jrandom/syndie/import.log
+
+
+
+<form action="import.cgi" method="POST" enctype="multipart/form-data">
+Metadata file: <input type="file" name="meta0" /><br />
+Metadata file: <input type="file" name="meta1" /><br />
+Metadata file: <input type="file" name="meta2" /><br />
+Post file: <input type="file" name="post0" /><br />
+Post file: <input type="file" name="post1" /><br />
+Post file: <input type="file" name="post2" /><br />
+<input type="submit" />
+</form>
+
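The same upload can also be driven without a browser. The following is a minimal Java
sketch - with a hypothetical archive URL and file names - that POSTs one metadata file
and one post file to import.cgi as multipart/form-data, using the meta0/post0 field
names shown in the form above. If the archive requires a passphrase, an additional
pass field would be sent the same way.

  import java.io.DataOutputStream;
  import java.io.File;
  import java.io.FileInputStream;
  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  /** illustrative multipart/form-data upload to an archive's import.cgi */
  public class ArchiveUpload {
      private static final String BOUNDARY = "----syndieUploadBoundary";

      public static void main(String args[]) throws Exception {
          URL url = new URL("http://archive.example.com/archive/import.cgi");
          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
          conn.setDoOutput(true);
          conn.setRequestMethod("POST");
          conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + BOUNDARY);
          DataOutputStream out = new DataOutputStream(conn.getOutputStream());
          writeFilePart(out, "meta0", new File("meta.syndie"));
          writeFilePart(out, "post0", new File("12345.syndie"));
          out.writeBytes("--" + BOUNDARY + "--\r\n");   // closing boundary
          out.flush();
          out.close();
          System.out.println("HTTP status: " + conn.getResponseCode());
      }

      private static void writeFilePart(DataOutputStream out, String field, File file) throws Exception {
          out.writeBytes("--" + BOUNDARY + "\r\n");
          out.writeBytes("Content-Disposition: form-data; name=\"" + field
                         + "\"; filename=\"" + file.getName() + "\"\r\n");
          out.writeBytes("Content-Type: application/x-syndie\r\n\r\n");
          InputStream in = new FileInputStream(file);
          byte buf[] = new byte[4096];
          int read;
          while ((read = in.read(buf)) != -1)
              out.write(buf, 0, read);
          in.close();
          out.writeBytes("\r\n");
      }
  }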
+By default the database is opened through HSQLDB's file://
schema support, which
+allows only one instance at a time to access the database and loads it into memory.
+The database can be configured for remote access through HSQLDB's
+hsql://hostname:portNum/dbName
or
+hsqls://hostname:portNum/dbName
schema support, offering remote access
+(either directly or over SSL/TLS). To use these alternate schemas, simply use the
+login command with
+--db jdbc:hsqldb:hsqls://127.0.0.1:9999/syndie
(etc) after starting
+up a standalone HSQLDB database configured for remote access.
+
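As a sketch of what remote access looks like from plain JDBC - assuming an HSQLDB server
is already listening on 127.0.0.1:9999 with a database named syndie, and using HSQLDB's
default sa account with an empty password (a real Syndie database will have its own
credentials):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  /** illustrative JDBC connection to a remotely served HSQLDB database */
  public class RemoteDbCheck {
      public static void main(String args[]) throws Exception {
          Class.forName("org.hsqldb.jdbcDriver");      // HSQLDB 1.8 driver class
          Connection con = DriverManager.getConnection(
                  "jdbc:hsqldb:hsql://127.0.0.1:9999/syndie", "sa", "");
          Statement stmt = con.createStatement();
          // any harmless query will do to verify connectivity
          ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM INFORMATION_SCHEMA.SYSTEM_TABLES");
          while (rs.next())
              System.out.println("connected, " + rs.getString(1) + " system tables visible");
          rs.close();
          stmt.close();
          con.close();
      }
  }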
+The database schema itself lives in src/syndie/db/ddl.txt
, and is documented therein. Basically, it has
+tables to contain individual channels, messages within those channels, the content
+stored in those messages (including attachments and references), individual local
+nyms, their keys, and their preferences. In addition, it has general configuration
+data for managing the database and the associated archive of
+.syndie messages.src/syndie/db/ddl_update*.txt
. They are run sequentially to turn
+earlier database schema versions into newer versions of the schema.$major
.$minor
$quality
,
+where $major
indicates a substantial functional change,
+$minor
indicates bugfixes and small improvements, and
+$quality
values of a,b,rc
indicate whether
+a release is alpha quality (substantial changes still in progress,
+liable to break, for geeks only), beta quality (unstable, but
+changes are primarily bugfixes, for geeky testers),
+release candidate quality (final testing before release),
+respectively. Releases without a,b,rc
are stable, production quality
+releases.
+
+
+
+
+syndie-$version.bin.exe
+ (java installer, includes bin/syndie.exe java launcher)
+ built with the "installer-exe" target, but requires
+ "-Dlaunch4jdir=/path/to/launch4j" and
+ "-Dizpackdir=/path/to/izpack_install" (e.g.
+ ant -Dizpackdir=/home/jrandom/IzPack_install/ -Dlaunch4jdir=/home/jrandom/launch4j-3.0.0-pre1-linux/ installer-exe
syndie-$version.bin.zip
+ (no java installer, includes bin/syndie.exe java launcher)
+ built with the "java-package-exe" target, but requires
+ "-Dlaunch4jdir=/path/to/launch4j" (e.g.
+ ant -Dlaunch4jdir=/home/jrandom/launch4j-3.0.0-pre1-linux/ installer-exe
syndie-$version.bin-noexe.zip
+ (no java installer, without bin/syndie.exe java launcher)
+ built with the "java-package" targetsyndie-$version.src.tar.bz2
+ (source package)
+ built with the "source-package" targetdoc/web/dist/
with
+"ant dist
", though you need to include the settings required for
+-Dlaunch4jdir
and -Dizpackdir
make -f Makefile.nix syndie
" creates ./syndie, and
+"make -f Makefile.nix package
" creates a syndie-native.tar.bz2,
+which is just like syndie-$version.bin.zip, except bin/syndie is the native
+executable instead of a shell script launching java. Work is ongoing for
+GCJ/MinGW support, but the Makefile.mingw should work with a viable MinGW
+install of GCJ 4.xjrandom
. Syndie is being built as part of
+I2P's development efforts, and your
+generous donations help provide jrandom's very modest cost of living,
+as well as development servers and hosting for Syndie and I2P (coming to
+approximately $500USD/month
).
+
+
+
+
+Source:
+ (*nix and OS X users, run the executable by typing
+ "java -jar syndie-1.000a.bin.exe
".
+ yes, really, you want the .exe)
+
+
+393F2DF9
(fingerprint AE89 D080 0E85 72F0 B777 B2ED C2FA 68C0 393F 2DF9
)
+
+
+mkdir syndie-dev ; cd syndie-dev ; darcs initialize ; darcs pull http://syndie.i2p.net/darcs/
(primary sources)
+ darcs record -m "change stuff" -A my@email.addr
+ darcs send -o myfile.darcs --sign
+ mail syndie-darcs at i2p dot net < myfile.darcs
+ If approved, they will be applied to the http://syndie.i2p.net/darcs/
+ archive.
+ darcs apply --verify=syndie.pubring myfile.darcs
+ That applies the patch if it was signed by one of the syndie developers.exe
installer, simply
+launch the included uninstaller. Otherwise, just remove the directory you
+installed Syndie into ($HOME/syndie
or C:\syndie
).
+The Syndie content is stored in $HOME/.syndie
by default, so
+you should delete that directory as well if you want to remove the
+content (and keys).$HOME/.syndie
directory.$HOME/syndie/bin/syndie /another/path
). In addition, you can have
+many different Syndie nyms within a single Syndie instance (see the "login
"
+and "register
" commands).bin/syndie
script to reference the local hsqldb.jar.
+
+
+
+
+
+What license is Syndie released under? (up)
+
+
+
+
+
+
+
+
+
+ login
+ menu post
+ create --channel 0000000000000000000000000000000000000000
+ addpage --in /etc/motd --content-type text/plain
+ addattachment --in ~/webcam.png --content-type image/png
+ listauthkeys --authorizedOnly true
+ authenticate 0
+ authorize 0
+ set --subject "Today's MOTD"
+ set --publicTags motd
+ execute
+ exit
+
+ cat motd-script | ./syndie > syndie.log
+
+
+syndie [@script] [data_root]
@script
parameter reads in the contents of the
+script
file, running them as if they came from the standard input.
+The optional data_root
parameter tells Syndie where to locate the
+database, archive, and related data files. If not specified, it uses
+$HOME/.syndie/
(or %HOME%\.syndie
on windows).//
.
+
+
+
+An example script:
+
+
login [--db $jdbcURL] [--login $loginName --pass $password]
register [--db $jdbcURL] --login $loginName --pass $password --name $name
restore --in $file [--db $jdbcURL]
+
logout
exit
up
togglePaginate
toggleDebug
prefs [--debug $boolean] [--paginate $boolean] [--httpproxyhost $hostname --httpproxyport $portNum] [--archive $archiveURL]
prefs
is called with no arguments,
+ then the preferences are simply displayed and not updated.import --in $filename
.syndie
file (either a metadata message or
+ a post). Alternately, it can import key files generated by
+ keygen
.keygen --type (read|manage|post|reply) [--scope $channelHash] (--pubOut $publicKeyFile --privOut $privateKeyFile | --sessionOut $sessionKeyFile)
--scope
parameter is just an informational field
+ included in the key files so that on import, they
+ can be used appropriately.version
?
help
sql $sqlQuery
(advanced)init $jdbcURL
(advanced)backup --out $file [--includeArchive $boolean]
builduri --url http://foo/bar
builduri --channel $chanHash [--message $messageId [--page $pageNum]]
builduri --archive $url [--password $pass]
history
!!
!$num
!-$num
^a[^b]
a
with b
in
+ the previous command, and run it. If ^b
is not specified,
+ the first occurrence of a
is removed.alias [foo $bar]
"alias bugs menu read; threads --channel all --tags syndie,bug,-wontfix,-closed,-worksforme,-claim"
.
+ Aliases work in all menu contexts, and are run after attempting to interpret
+ the command as a normal instruction - meaning you cannot effectively override
+ existing commands with aliases.
+
channels [--unreadOnly $boolean] [--name $name] [--hash $hashPrefix]
next [--lines $num]
prev [--lines $num]
meta [--channel ($index|$hash)]
messages --channel ($index|$hash) [--includeUnauthorized $boolean] [--includeUnauthenticated $boolean]
threads [--channel ($index|$hash|all)] [--tags [-]tag[,[-]tag]*] [--includeUnauthorized $boolean] [--compact $boolean]
-
. The
+ display can be fairly verbose or it can be compact (limiting the output
+ to one line per thread). If called with no arguments, then it just
+ displays the last set of matching threads again.view (--message ($index|$uri)|--thread $index) [--page $n]
threadnext [--position $position]
threadprev [--position $position]
importkey --position $position
export [--message ($index|$uri)] --out $directory
save [--message ($index|$uri)] (--page $n|--attachment $n) --out $filename
reply
ban [--scope (author|channel|$hash)] [--delete $boolean]
decrypt [(--message $msgId|--channel $channelId)] [--passphrase pass]
watch (author|channel) [--nickname $name] [--category $nameInTree]
+
channels
next [--lines $num]
prev [--lines $num]
meta [--channel ($index|$hash)]
create
update (--channel $index|$hash)
set [$option=$value]*
set --name $channelName
set --description $desc
set --avatar $filename
set --edition $editionNum
set --expiration $yyyyMMdd
set --publicPosting $boolean
set --publicReplies $boolean
set --pubTag [$tag[,$tag]*]
set --privTag [$tag[,$tag]*]
set --refs $filename
[[\t]*$name\t$uri\t$refType\t$description\n]*
.
+ The tab indentation at the beginning of the line determines the tree structure,
+ and blank values are allowed for various fields. set --pubArchive [$syndieURI[,$syndieURI]*]
set --privArchive [$syndieURI[,$syndieURI]*]
set --encryptContent $boolean
set --bodyPassphrase $passphrase
set --bodyPassphrasePrompt $prompt
set --bodyPassphrasePrompt "1+1" --bodyPassphrase "2"
listnyms [--name $namePrefix] [--channel $hashPrefix]
addnym (--nym $index|--key $base64(pubKey)) --action (manage|post)
removenym (--nym $index|--key $base64(pubKey)) --action (manage|post)
preview
execute
cancel
+
channels [--capability (manage|post)] [--name $name] [--hash $prefix]
next [--lines $num]
prev [--lines $num]
meta [--channel ($index|$hash)]
create --channel ($index|$hash)
addpage [--page $num] --in ($filename|stdin) [--type $contentType]
--in
parameter, the content is read from the standard
+ input until terminated with a line containing only a single ".". The
+ newlines are stripped on each line so that it ends with "\n" for all
+ users, regardless of whether their OS uses "\n", "\r\n", or "\r" for line
+ terminators.listpages
delpage $index
addattachment [--attachment $num] --in $filename [--type $contentType] [--name $name] [--description $desc]
listattachments
delattachment $index
listkeys [--scope $scope] [--type $type]
addref
addref --in $filename
addref [--name $name] --uri $uri [--reftype $type] [--description $desc]
addref --readkey $keyHash --scope $scope [--name $name] [--description $desc]
addref --postkey $keyHash --scope $scope [--name $name] [--description $desc]
addref --managekey $keyHash --scope $scope [--name $name] [--description $desc]
addref --replykey $keyHash --scope $scope [--name $name] [--description $desc]
listrefs
delref $index
addparent --uri $uri
listparents
delparent $index
listauthkeys [--authorizedonly $boolean]
authenticate $index
listreadkeys
set --readkey (public|$index|pbe --passphrase $pass --prompt $prompt)
public
, create a random key and publicize it in the post's
+ publicly readable headers. if pbe, then derive a read key from the
+ passphrase, publicizing the prompt in the public headers. Otherwise,
+ use the indexed channel read key//set --cancel $uri[,$uri]*
set --messageId ($id|date)
set --subject $subject
set --avatar $filename
set --encryptToReply $boolean
set --overwrite $uri
set --expiration ($yyyyMMdd|none)
set --forceNewThread $boolean
set --refuseReplies $boolean
preview [--page $n]
execute
cancel
+
buildindex
getindex --archive $url [--proxyHost $host --proxyPort $port] [--pass $pass]
+ [--scope (all|new|meta|unauth)] [--channel $chan]
diff
fetch [--style (diff|known|metaonly|pir|unauth)] [--includeReplies $boolean]
(nextpbe|prevpbe) [--lines $num]
resolvepbe --index $num --passphrase $passphrase
schedule --put (outbound|outboundmeta|archive|archivemeta)
+ [--deleteOutbound $boolean] [--knownChanOnly $boolean]
post [--postURL $url] [--passphrase $pass]
meta$n
+ and post$n
, where n >= 0. If the passphrase is specified, it
+ includes pass=$pass
. Perhaps later that will switch to
+ base64(HMAC($YYYYMMDD,PBE($password))) so it can be slightly secure even
+ in absence of TLS/etc?bulkimport --dir $directory --delete $boolean
listban
unban [--scope $index|$chanHash]
login
+ menu post
+ create --channel 0000000000000000000000000000000000000000
+ addpage --in /etc/motd --content-type text/plain
+ addattachment --in ~/public_html/webcam.png --content-type image/png --name cam.png
+ listauthkeys --authorizedOnly true
+ authenticate 0
+ authorize 0
+ set --subject "Today's MOTD"
+ set --publicTags motd
+ execute
+
+
+
+Relationship between Syndie and:
+
+
+I2P (up)
+
+
+
+Tor (up)
+
+
+
+Freenet (up)
+
+
+
+Usenet (up)
+
+
+
+OpenDHT (up)
+
+
+
+Feedspace (up)
+
+
+
+Feedtree (up)
+
+
+
+Eternity Service (up)
+
+
+
+PGP/GPG (up)
+
+
+
+
+
+
+Subsequent releases will improve Syndie's capabilities across several dimensions:
+.syndie
files), and HTTP
+ syndication to public Syndie archives through
+ the (scriptable) text interface..syndie
file format
+ and encryption algorithms, Syndie URIs, and the
+ database schema, allowing extensions and alternate
+ implementations by third parties
+
+
+
+
+
Syndie messages (up)
+A .syndie
file contains signed and potentially encrypted data for
+passing Syndie channel metadata and posts around. It is made up of two parts - a
+UTF-8 encoded header and a body. The header begins with a type line, followed by
+name=value pairs, delimited by the newline character ('\n' or 0x0A). After
+the pairs are complete, a blank newline is included, followed by the line
+"Size=$numBytes\n", where $numBytes is the size of the body (base10). After that comes
+that many bytes making up the body of the enclosed message, followed by two
+newline delimited signature lines - AuthorizationSig=$signature and
+AuthenticationSig=$signature. There can be any arbitrary amount of data after
+the signature lines, but it is not currently interpreted.
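To make the framing concrete, here is a minimal Java sketch that walks the unencrypted
portion of a .syndie file: the type line, the name=value pairs, the Size line, the body
(skipped here), and the two signature lines. It treats header bytes as single-byte
characters, which is sufficient for the ASCII examples in this document, and the class
name is illustrative only.

  import java.io.FileInputStream;
  import java.io.IOException;
  import java.io.InputStream;

  /** illustrative reader for the header framing of a .syndie file */
  public class SyndieHeaderDump {
      public static void main(String args[]) throws IOException {
          InputStream in = new FileInputStream(args[0]);
          System.out.println("type: " + readLine(in));        // type line
          String line;
          while ((line = readLine(in)) != null) {
              if (line.length() == 0) break;                  // blank line ends the pairs
              System.out.println("header: " + line);          // name=value pair
          }
          String sizeLine = readLine(in);                     // "Size=$numBytes"
          long size = Long.parseLong(sizeLine.substring("Size=".length()));
          System.out.println("body size: " + size + " bytes");
          long skipped = 0;
          while (skipped < size) {                            // skip the (possibly encrypted) body
              long cur = in.skip(size - skipped);
              if (cur <= 0) break;
              skipped += cur;
          }
          System.out.println(readLine(in));                   // AuthorizationSig=$signature
          System.out.println(readLine(in));                   // AuthenticationSig=$signature
          in.close();
      }

      /** read a single '\n' delimited line, or null on EOF */
      private static String readLine(InputStream in) throws IOException {
          StringBuffer buf = new StringBuffer();
          int c;
          while ((c = in.read()) != -1 && c != '\n')
              buf.append((char) c);
          if ((c == -1) && (buf.length() == 0)) return null;
          return buf.toString();
      }
  }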
+
+ rand(nonzero) padding + 0 + internalSize + totalSize + data + rand
+
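A sketch of stripping that padding follows. The fragment above does not give the width
of the two size fields, so this assumes both are 4 byte big-endian unsigned integers and
that internalSize is the length of the data segment - treat it as a reading aid, not as
the actual implementation.

  /** illustrative unpadding of a decrypted body laid out as above */
  public class PaddedBodySketch {
      public static byte[] extractData(byte decrypted[]) {
          int off = 0;
          while (decrypted[off] != 0x00) off++;          // skip the random nonzero padding
          off++;                                         // skip the 0x00 delimiter
          long internalSize = readUInt(decrypted, off);  // assumed 4 byte field
          long totalSize = readUInt(decrypted, off + 4); // assumed 4 byte field (padded length)
          byte data[] = new byte[(int) internalSize];
          System.arraycopy(decrypted, off + 8, data, 0, data.length);
          return data;                                   // trailing random bytes are ignored
      }
      private static long readUInt(byte buf[], int off) {
          long rv = 0;
          for (int i = 0; i < 4; i++)
              rv = (rv << 8) | (buf[off + i] & 0xFF);
          return rv;
      }
  }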
+
+
+
+headers.dat [used in: posts, private messages, metadata posts]
+page$n.dat [used in: posts, private messages]
+page$n.cfg [used in: posts, private messages]
+attach$n.dat [used in: posts, private messages]
+attach$n.cfg [used in: posts, private messages]
+avatar32.png [used in: posts, private messages, metadata posts]
+references.cfg [used in: posts, private messages, metadata posts]
+
+Syndie key files (up)
+
+keytype: [manage|manage-pub|reply|reply-pub|post|post-pub|read]\n
+scope: $base64(channelHash)\n
+raw: $base64(bytes)\n
+
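A minimal sketch of emitting a key file in that format (the values below are
placeholders, not real key material):

  import java.io.FileOutputStream;
  import java.io.IOException;
  import java.io.OutputStream;

  /** illustrative writer for a Syndie key file in the format shown above */
  public class KeyFileSketch {
      public static void write(String keyType, String scopeBase64, String rawBase64,
                               String filename) throws IOException {
          // keyType is one of manage|manage-pub|reply|reply-pub|post|post-pub|read
          StringBuffer buf = new StringBuffer();
          buf.append("keytype: ").append(keyType).append('\n');
          buf.append("scope: ").append(scopeBase64).append('\n');
          buf.append("raw: ").append(rawBase64).append('\n');
          OutputStream out = new FileOutputStream(filename);
          out.write(buf.toString().getBytes("UTF-8"));
          out.close();
      }

      public static void main(String args[]) throws IOException {
          write("read", "base64OfChannelHash", "base64OfSessionKey", "channel-read.key");
      }
  }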
+
+Syndie URIs (up)
+
+Type: url
+Attributes:
+* net: what network the URL is on, such as "i2p", "tor", "ip", or "freenet" (string)
+* url: network-specific URL (string)
+* name: [optional] short name of the resource referenced (string)
+* desc: [optional] longer description of the resource (string)
+* tag: [optional] list of tags attached to the reference (string[])
+
+Type: channel
+Attributes:
+* channel: [1] base64 of the SHA256 of the channel's identity public key (string)
+* author: [1] base64 of the SHA256 of the author's identity public key, if different from the channel (string)
+* msgId: [1] unique identifier within the channel's scope (or author's scope, if specified) (integer)
+* page: [optional] page within the message's scope (integer)
+* attachment: [optional] attachment within the message's scope (integer)
+* readKeyType: [optional] describes the readKey, e.g. "AES256" (string)
+* readKeyData: [optional] base64 of the key required to read posts in the channel (string)
+* postKeyType: [optional] describes the postKey, e.g. "DSA1024" (string)
+* postKeyData: [optional] base64 of the private key required to post to the channel (string)
+* name: [optional] short name of the resource referenced (string)
+* desc: [optional] longer description of the resource (string)
+* tag: [optional] list of tags attached to the reference (string[])
+
+[1] If the field is not specified, it must be implicitly derived from the context.
+ For instance, a syndie post may omit the channel and msgId when referring to another
+ page or attachment on the current message.
+
+Type: search
+Attributes:
+* channel: [optional] base64 of the SHA256 of the channel's identity public key (string)
+* author: [optional] base64 of the SHA256 of the author's identity public key (string)
+* tag: [optional] list of tags to match (string[])
+* keyword: [optional] list of keywords to match (string[])
+* age: [optional] number of days in the past to look back (integer)
+* status: [optional] channels to look in- "new", "watched", "all" (string)
+
+Type: archive
+Attributes:
+* net: what network the URL is on, such as "i2p", "tor", "ip", or "freenet" (string)
+* url: network-specific URL (string)
+* readKeyType: [optional] describes the readKey, e.g. "AES256" (string)
+* readKeyData: [optional] base64 of the key required to pull data from the archive (string)
+* postKeyType: [optional] describes the postKey, e.g. "AES256" (string)
+* postKeyData: [optional] base64 of the key required to pull data from the archive (string)
+* identKeyType: [optional] describes the identKey, e.g. "DSA1024" (string)
+* identKeyData: [optional] base64 of the key the archive will identify themselves as (string)
+* name: [optional] short name of the resource referenced (string)
+* desc: [optional] longer description of the resource (string)
+* tag: [optional] list of tags attached to the reference (string[])
+
+Type: text
+Attributes:
+* name: [optional] short name of the freeform text reference (string)
+* body: [optional] freeform text reference (string)
+* tag: [optional] list of tags attached to the reference (string[])
+
+The canonical encoding is: "urn:syndie:$refType:$bencodedAttributes",
+with $refType being one of the five types above, and $bencodedAttributes
+being the bencoded attributes. Strings are UTF-8, and the bencoded attributes
+are ordered according to the UK locale (in the canonical form).
+
+Examples:
+ urn:syndie:url:d3:url19:http://www.i2p.net/e
+ urn:syndie:channel:d7:channel40:12345678901234567890123456789012345678909:messageIdi42e4pagei0ee
+ urn:syndie:channel:d10:attachmenti3ee
+ urn:syndie:channel:d4:pagei2ee
+ urn:syndie:search:d3:tag3i2pe
+ urn:syndie:search:d6:status7:watchede
+
+Within syndie-enabled apps, the urn:syndie: prefix can be dropped:
+ url:d3:url19:http://www.i2p.net/e
+ channel:d7:channel40:12345678901234567890123456789012345678909:messageIdi42e4pagei0ee
+ channel:d10:attachmenti3ee
+ channel:d4:pagei2ee
+ search:d3:tag3i2pe
+ search:d6:status7:watchede
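+
+To make the canonical encoding concrete, the following sketch (a hypothetical
+helper, not the Syndie implementation) bencodes a flat attribute map into a
+URI. It only handles string and integer values, measures string lengths in
+characters rather than UTF-8 bytes, and sorts keys in plain String order
+rather than the UK-locale ordering noted above.
+
+ import java.util.*;
+
+ /** Sketch: builds the canonical "urn:syndie:$refType:$bencodedAttributes" form. */
+ public class SyndieUriSketch {
+     public static String encode(String refType, SortedMap attributes) {
+         StringBuffer buf = new StringBuffer("urn:syndie:").append(refType).append(":d");
+         for (Iterator iter = attributes.entrySet().iterator(); iter.hasNext(); ) {
+             Map.Entry entry = (Map.Entry)iter.next();
+             String key = (String)entry.getKey();
+             buf.append(key.length()).append(':').append(key);    // bencoded key
+             Object val = entry.getValue();
+             if (val instanceof Number)                           // integers: i<value>e
+                 buf.append('i').append(val).append('e');
+             else {                                               // strings: <length>:<value>
+                 String str = val.toString();
+                 buf.append(str.length()).append(':').append(str);
+             }
+         }
+         return buf.append('e').toString();
+     }
+
+     public static void main(String args[]) {
+         SortedMap attrs = new TreeMap();                         // TreeMap keeps keys sorted
+         attrs.put("url", "http://www.i2p.net/");
+         System.out.println(encode("url", attrs));                // urn:syndie:url:d3:url19:http://www.i2p.net/e
+     }
+ }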
+
+
+Syndie message headers (up)
+Some headers are only honored when they are included in the headers.dat
+zip headers, rather than in the unencrypted publicly visible headers. Allow on posts
+means the header can be used on normal posts. Allow on private messages
+means the header can be used on posts encrypted to a channel's private key.
+Allow on metadata messages means the header can be used on metadata
+messages configuring a channel.
+
+base64: the content is base64 encoded
+with an alternate alphabet. The alphabet is the standard one except with
+"~" replacing "/" and "-" replacing "+" (for safer URL and file name encoding).
+Syndie use cases (aka "why you would use Syndie")
+
+
+Decentralized forum (up)
+
+
+ *
+ *
+ * Modified by jrandom@i2p.net to remove unnecessary gnu-crypto dependencies, and
+ * renamed from Sha256 to avoid conflicts with JVMs using gnu-crypto as their JCE
+ * provider.
+ *
+ * @version $Revision: 1.3 $
+ */
+public class Sha256Standalone extends BaseHashStandalone {
+ // Constants and variables
+ // -------------------------------------------------------------------------
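+    // SHA-256 round constants: the first 32 bits of the fractional parts of
+    // the cube roots of the first sixty-four prime numbers (FIPS 180-2)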
+ private static final int[] k = {
+ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
+ 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
+ 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
+ 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
+ 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
+ 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
+ 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
+ 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
+ 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
+ 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
+ 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
+ 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
+ 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
+ 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
+ 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
+ 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
+ };
+
+ private static final int BLOCK_SIZE = 64; // inner block size in bytes
+
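+    /** SHA-256 of "abc" (standard test vector); expected output for the correctness self-test */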
+ private static final String DIGEST0 =
+ "BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD";
+
+ private static final int[] w = new int[64];
+
+ /** caches the result of the correctness test, once executed. */
+ private static Boolean valid;
+
+ /** 256-bit interim result. */
+ private int h0, h1, h2, h3, h4, h5, h6, h7;
+
+ // Constructor(s)
+ // -------------------------------------------------------------------------
+
+ /** Trivial 0-arguments constructor. */
+ public Sha256Standalone() {
+ super("sha256/standalone", 32, BLOCK_SIZE);
+ }
+
+ /**
+ *
+ *
+ *
+ *
+ *
+ *
+ * Modified by jrandom for I2P to use a standalone gnu-crypto SHA256, Cryptix's AES,
+ * to strip out some unnecessary dependencies and increase the buffer size.
+ * Renamed from Fortuna to FortunaStandalone so it doesn't conflict with the
+ * gnu-crypto implementation, which has been imported into GNU/classpath
+ *
+ */
+public class FortunaStandalone extends BasePRNGStandalone implements Serializable, RandomEventListenerStandalone
+{
+
+ private static final long serialVersionUID = 0xFACADE;
+
+ private static final int SEED_FILE_SIZE = 64;
+ static final int NUM_POOLS = 32;
+ static final int MIN_POOL_SIZE = 64;
+ final Generator generator;
+ final Sha256Standalone[] pools;
+ long lastReseed;
+ int pool;
+ int pool0Count;
+ int reseedCount;
+ static long refillCount = 0;
+ static long lastRefill = System.currentTimeMillis();
+
+ public static final String SEED = "gnu.crypto.prng.fortuna.seed";
+
+ public FortunaStandalone()
+ {
+ super("Fortuna i2p");
+ generator = new Generator();
+ pools = new Sha256Standalone[NUM_POOLS];
+ for (int i = 0; i < NUM_POOLS; i++)
+ pools[i] = new Sha256Standalone();
+ lastReseed = 0;
+ pool = 0;
+ pool0Count = 0;
+ allocBuffer();
+ }
+ protected void allocBuffer() {
+ buffer = new byte[4*1024*1024]; //256]; // larger buffer to reduce churn
+ }
+
+ public void seed(byte val[]) {
+ Map props = new HashMap(1);
+ props.put(SEED, (Object)val);
+ init(props);
+ fillBlock();
+ }
+
+ public void setup(Map attributes)
+ {
+ lastReseed = 0;
+ reseedCount = 0;
+ pool = 0;
+ pool0Count = 0;
+ generator.init(attributes);
+ }
+
+ public void fillBlock()
+ {
+ long start = System.currentTimeMillis();
+ if (pool0Count >= MIN_POOL_SIZE
+ && System.currentTimeMillis() - lastReseed > 100)
+ {
+ reseedCount++;
+ //byte[] seed = new byte[0];
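+        // Fortuna pool schedule: pool i contributes to a reseed only when
+        // reseedCount is a multiple of 2^i, so higher-numbered pools save
+        // their entropy for less frequent, larger reseeds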
+ for (int i = 0; i < NUM_POOLS; i++)
+ {
+ if (reseedCount % (1 << i) == 0) {
+ generator.addRandomBytes(pools[i].digest());
+ }
+ }
+ lastReseed = System.currentTimeMillis();
+ }
+ generator.nextBytes(buffer);
+ long now = System.currentTimeMillis();
+ long diff = now-lastRefill;
+ lastRefill = now;
+ long refillTime = now-start;
+ System.out.println("Refilling " + (++refillCount) + " after " + diff + " for the PRNG took " + refillTime);
+ }
+
+ public void addRandomByte(byte b)
+ {
+ pools[pool].update(b);
+ if (pool == 0)
+ pool0Count++;
+ pool = (pool + 1) % NUM_POOLS;
+ }
+
+ public void addRandomBytes(byte[] buf, int offset, int length)
+ {
+ pools[pool].update(buf, offset, length);
+ if (pool == 0)
+ pool0Count += length;
+ pool = (pool + 1) % NUM_POOLS;
+ }
+
+ public void addRandomEvent(RandomEventStandalone event)
+ {
+ if (event.getPoolNumber() < 0 || event.getPoolNumber() >= pools.length)
+ throw new IllegalArgumentException("pool number out of range: "
+ + event.getPoolNumber());
+ pools[event.getPoolNumber()].update(event.getSourceNumber());
+ pools[event.getPoolNumber()].update((byte) event.getData().length);
+ byte data[] = event.getData();
+ pools[event.getPoolNumber()].update(data, 0, data.length); //event.getData());
+ if (event.getPoolNumber() == 0)
+ pool0Count += event.getData().length;
+ }
+
+ // Reading and writing this object is equivalent to storing and retrieving
+ // the seed.
+
+ private void writeObject(ObjectOutputStream out) throws IOException
+ {
+ byte[] seed = new byte[SEED_FILE_SIZE];
+ generator.nextBytes(seed);
+ out.write(seed);
+ }
+
+ private void readObject(ObjectInputStream in) throws IOException
+ {
+ byte[] seed = new byte[SEED_FILE_SIZE];
+ in.readFully(seed);
+ generator.addRandomBytes(seed);
+ }
+
+ /**
+ * The Fortuna generator function. The generator is a PRNG in its own
+ * right; Fortuna itself is basically a wrapper around this generator
+ * that manages reseeding in a secure way.
+ */
+ public static class Generator extends BasePRNGStandalone implements Cloneable
+ {
+
+ private static final int LIMIT = 1 << 20;
+
+ private final Sha256Standalone hash;
+ private final byte[] counter;
+ private final byte[] key;
+ /** current encryption key built from the keying material */
+ private Object cryptixKey;
+ private CryptixAESKeyCache.KeyCacheEntry cryptixKeyBuf;
+ private boolean seeded;
+
+ public Generator ()
+ {
+ super("Fortuna.generator.i2p");
+ this.hash = new Sha256Standalone();
+ counter = new byte[16]; //cipher.defaultBlockSize()];
+ buffer = new byte[16]; //cipher.defaultBlockSize()];
+ int keysize = 32;
+ key = new byte[keysize];
+ cryptixKeyBuf = CryptixAESKeyCache.createNew();
+ }
+
+ public final byte nextByte()
+ {
+ byte[] b = new byte[1];
+ nextBytes(b, 0, 1);
+ return b[0];
+ }
+
+ public final void nextBytes(byte[] out, int offset, int length)
+ {
+ if (!seeded)
+ throw new IllegalStateException("generator not seeded");
+
+ int count = 0;
+ do
+ {
+ int amount = Math.min(LIMIT, length - count);
+ super.nextBytes(out, offset+count, amount);
+ count += amount;
+
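+        // rekey with fresh counter-mode output after each chunk so previously
+        // returned bytes cannot be reconstructed from a later compromise of the
+        // generator state (Fortuna's backtracking resistance)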
+ for (int i = 0; i < key.length; i += counter.length)
+ {
+ //fillBlock(); // inlined
+ CryptixRijndael_Algorithm.blockEncrypt(counter, buffer, 0, 0, cryptixKey);
+ incrementCounter();
+ int l = Math.min(key.length - i, 16);//cipher.currentBlockSize());
+ System.arraycopy(buffer, 0, key, i, l);
+ }
+ resetKey();
+ }
+ while (count < length);
+ //fillBlock(); // inlined
+ CryptixRijndael_Algorithm.blockEncrypt(counter, buffer, 0, 0, cryptixKey);
+ incrementCounter();
+ ndx = 0;
+ }
+
+ public final void addRandomByte(byte b)
+ {
+ addRandomBytes(new byte[] { b });
+ }
+
+ public final void addRandomBytes(byte[] seed, int offset, int length)
+ {
+ hash.update(key, 0, key.length);
+ hash.update(seed, offset, length);
+ byte[] newkey = hash.digest();
+ System.arraycopy(newkey, 0, key, 0, Math.min(key.length, newkey.length));
+ //hash.doFinal(key, 0);
+ resetKey();
+ incrementCounter();
+ seeded = true;
+ }
+
+ public final void fillBlock()
+ {
+ ////i2p: this is not being checked as a microoptimization
+ //if (!seeded)
+ // throw new IllegalStateException("generator not seeded");
+ CryptixRijndael_Algorithm.blockEncrypt(counter, buffer, 0, 0, cryptixKey);
+ incrementCounter();
+ }
+
+ public void setup(Map attributes)
+ {
+ seeded = false;
+ Arrays.fill(key, (byte) 0);
+ Arrays.fill(counter, (byte) 0);
+ byte[] seed = (byte[]) attributes.get(SEED);
+ if (seed != null)
+ addRandomBytes(seed);
+ }
+
+ /**
+ * Resets the cipher's key. This is done after every reseed, which
+     * combines the old key and the seed, and processes that through the
+ * hash function.
+ */
+ private final void resetKey()
+ {
+ try {
+ cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, cryptixKeyBuf);
+ } catch (InvalidKeyException ike) {
+ throw new Error("hrmf", ike);
+ }
+ }
+
+ /**
+ * Increment `counter' as a sixteen-byte little-endian unsigned integer
+ * by one.
+ */
+ private final void incrementCounter()
+ {
+ for (int i = 0; i < counter.length; i++)
+ {
+ counter[i]++;
+ if (counter[i] != 0)
+ break;
+ }
+ }
+ }
+
+ public static void main(String args[]) {
+ byte in[] = new byte[16];
+ byte out[] = new byte[16];
+ byte key[] = new byte[32];
+ try {
+ CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
+ Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
+ long beforeAll = System.currentTimeMillis();
+ for (int i = 0; i < 256; i++) {
+ //long before =System.currentTimeMillis();
+ for (int j = 0; j < 1024; j++)
+ CryptixRijndael_Algorithm.blockEncrypt(in, out, 0, 0, cryptixKey);
+ //long after = System.currentTimeMillis();
+ //System.out.println("encrypting 16KB took " + (after-before));
+ }
+ long after = System.currentTimeMillis();
+ System.out.println("encrypting 4MB took " + (after-beforeAll));
+ } catch (Exception e) { e.printStackTrace(); }
+
+ try {
+ CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
+ Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
+ byte data[] = new byte[4*1024*1024];
+ long beforeAll = System.currentTimeMillis();
+ //CryptixRijndael_Algorithm.ecbBulkEncrypt(data, data, cryptixKey);
+ long after = System.currentTimeMillis();
+ System.out.println("encrypting 4MB took " + (after-beforeAll));
+ } catch (Exception e) { e.printStackTrace(); }
+ /*
+ FortunaStandalone f = new FortunaStandalone();
+ java.util.HashMap props = new java.util.HashMap();
+ byte initSeed[] = new byte[1234];
+ new java.util.Random().nextBytes(initSeed);
+ long before = System.currentTimeMillis();
+ props.put(SEED, (byte[])initSeed);
+ f.init(props);
+ byte buf[] = new byte[8*1024];
+ for (int i = 0; i < 64*1024; i++) {
+ f.nextBytes(buf);
+ }
+ long time = System.currentTimeMillis() - before;
+ System.out.println("512MB took " + time + ", or " + (8*64d)/((double)time/1000d) +"MBps");
+ */
+ }
+}
diff --git a/src/gnu/crypto/prng/IRandomStandalone.java b/src/gnu/crypto/prng/IRandomStandalone.java
new file mode 100644
index 0000000..3a370af
--- /dev/null
+++ b/src/gnu/crypto/prng/IRandomStandalone.java
@@ -0,0 +1,186 @@
+package gnu.crypto.prng;
+
+// ----------------------------------------------------------------------------
+// $Id: IRandomStandalone.java,v 1.1 2006-07-04 16:18:04 jrandom Exp $
+//
+// Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc.
+//
+// This file is part of GNU Crypto.
+//
+// GNU Crypto is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2, or (at your option)
+// any later version.
+//
+// GNU Crypto is distributed in the hope that it will be useful, but
+// WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+// General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program; see the file COPYING. If not, write to the
+//
+// Free Software Foundation Inc.,
+// 51 Franklin Street, Fifth Floor,
+// Boston, MA 02110-1301
+// USA
+//
+// Linking this library statically or dynamically with other modules is
+// making a combined work based on this library. Thus, the terms and
+// conditions of the GNU General Public License cover the whole
+// combination.
+//
+// As a special exception, the copyright holders of this library give
+// you permission to link this library with independent modules to
+// produce an executable, regardless of the license terms of these
+// independent modules, and to copy and distribute the resulting
+// executable under terms of your choice, provided that you also meet,
+// for each linked independent module, the terms and conditions of the
+// license of that module. An independent module is a module which is
+// not derived from or based on this library. If you modify this
+// library, you may extend this exception to your version of the
+// library, but you are not obligated to do so. If you do not wish to
+// do so, delete this exception statement from your version.
+// ----------------------------------------------------------------------------
+
+import java.util.Map;
+
+/**
+ *
+ *
+ *
+ * L bits of an output sequence S,
+ * can predict the (L+1)st bit of S with a
+ * probability significantly greater than 1/2."
+ *
+ *
+ * @version $Revision: 1.1 $
+ */
+public interface IRandomStandalone extends Cloneable {
+
+ // Constants
+ // -------------------------------------------------------------------------
+
+ // Methods
+ // -------------------------------------------------------------------------
+
+ /**
+ *
+   * CRC Press, Inc. ISBN 0-8493-8523-7, 1997
+   * Menezes, A., van Oorschot, P. and S. Vanstone.
+   *
+   * Fills the designated byte array, starting from the byte at index
+   * offset, for a maximum of length bytes with the
+   * output of this generator instance.
+   *
+   * @param out the placeholder to contain the generated random bytes.
+   * @param offset the starting index in out to consider. This method
+   * does nothing if this parameter is not within 0 and
+   * out.length.
+   * @param length the maximum number of required random bytes. This method
+   * does nothing if this parameter is less than 1.
+   * @exception IllegalStateException if the instance is not yet initialised.
+   * @exception LimitReachedExceptionStandalone if this instance has reached its
+   * theoretical limit for generating non-repetitive pseudo-random data.
+ */
+ void nextBytes(byte[] out, int offset, int length)
+ throws IllegalStateException, LimitReachedExceptionStandalone;
+
+ /**
+ *
+ * private static final Log _log = new Log(someClass.class);
+ *
+ * It is for this reason that applications that care about working with multiple
+ * contexts should build their own context as soon as possible (within the main(..))
+ * so that any referenced components will latch on to that context instead of
+ * instantiating a new one. However, there are situations in which both can be
+ * relevant.
+ *
+ */
+public class I2PAppContext {
+ /** the context that components without explicit root are bound */
+ protected static I2PAppContext _globalAppContext;
+
+ private Properties _overrideProps;
+
+ private StatManager _statManager;
+ private SessionKeyManager _sessionKeyManager;
+ private NamingService _namingService;
+ private PetNameDB _petnameDb;
+ private ElGamalEngine _elGamalEngine;
+ private ElGamalAESEngine _elGamalAESEngine;
+ private AESEngine _AESEngine;
+ private LogManager _logManager;
+ private HMACGenerator _hmac;
+ private HMAC256Generator _hmac256;
+ private SHA256Generator _sha;
+ private Clock _clock;
+ private DSAEngine _dsa;
+ private RoutingKeyGenerator _routingKeyGenerator;
+ private RandomSource _random;
+ private KeyGenerator _keyGenerator;
+ private volatile boolean _statManagerInitialized;
+ private volatile boolean _sessionKeyManagerInitialized;
+ private volatile boolean _namingServiceInitialized;
+ private volatile boolean _petnameDbInitialized;
+ private volatile boolean _elGamalEngineInitialized;
+ private volatile boolean _elGamalAESEngineInitialized;
+ private volatile boolean _AESEngineInitialized;
+ private volatile boolean _logManagerInitialized;
+ private volatile boolean _hmacInitialized;
+ private volatile boolean _hmac256Initialized;
+ private volatile boolean _shaInitialized;
+ private volatile boolean _clockInitialized;
+ private volatile boolean _dsaInitialized;
+ private volatile boolean _routingKeyGeneratorInitialized;
+ private volatile boolean _randomInitialized;
+ private volatile boolean _keyGeneratorInitialized;
+
+ /**
+ * Pull the default context, creating a new one if necessary, else using
+ * the first one created.
+ *
+ */
+ public static I2PAppContext getGlobalContext() {
+ synchronized (I2PAppContext.class) {
+ if (_globalAppContext == null) {
+ _globalAppContext = new I2PAppContext(false, null);
+ }
+ }
+ return _globalAppContext;
+ }
+
+ /**
+ * Lets root a brand new context
+ *
+ */
+ public I2PAppContext() {
+ this(true, null);
+ }
+ /**
+ * Lets root a brand new context
+ *
+ */
+ public I2PAppContext(Properties envProps) {
+ this(true, envProps);
+ }
+ /**
+ * @param doInit should this context be used as the global one (if necessary)?
+ */
+ private I2PAppContext(boolean doInit, Properties envProps) {
+ if (doInit) {
+ synchronized (I2PAppContext.class) {
+ if (_globalAppContext == null)
+ _globalAppContext = this;
+ }
+ }
+ _overrideProps = envProps;
+ _statManager = null;
+ _sessionKeyManager = null;
+ _namingService = null;
+ _petnameDb = null;
+ _elGamalEngine = null;
+ _elGamalAESEngine = null;
+ _logManager = null;
+ _statManagerInitialized = false;
+ _sessionKeyManagerInitialized = false;
+ _namingServiceInitialized = false;
+ _elGamalEngineInitialized = false;
+ _elGamalAESEngineInitialized = false;
+ _logManagerInitialized = false;
+ }
+
+ /**
+ * Access the configuration attributes of this context, using properties
+ * provided during the context construction, or falling back on
+ * System.getProperty if no properties were provided during construction
+ * (or the specified prop wasn't included).
+ *
+ */
+ public String getProperty(String propName) {
+ if (_overrideProps != null) {
+ if (_overrideProps.containsKey(propName))
+ return _overrideProps.getProperty(propName);
+ }
+ return System.getProperty(propName);
+ }
+
+ /**
+ * Access the configuration attributes of this context, using properties
+ * provided during the context construction, or falling back on
+ * System.getProperty if no properties were provided during construction
+ * (or the specified prop wasn't included).
+ *
+ */
+ public String getProperty(String propName, String defaultValue) {
+ if (_overrideProps != null) {
+ if (_overrideProps.containsKey(propName))
+ return _overrideProps.getProperty(propName, defaultValue);
+ }
+ return System.getProperty(propName, defaultValue);
+ }
+ /**
+ * Access the configuration attributes of this context, listing the properties
+ * provided during the context construction, as well as the ones included in
+ * System.getProperties.
+ *
+ * @return set of Strings containing the names of defined system properties
+ */
+ public Set getPropertyNames() {
+ Set names = new HashSet(System.getProperties().keySet());
+ if (_overrideProps != null)
+ names.addAll(_overrideProps.keySet());
+ return names;
+ }
+
+ /**
+ * The statistics component with which we can track various events
+ * over time.
+ */
+ public StatManager statManager() {
+ if (!_statManagerInitialized) initializeStatManager();
+ return _statManager;
+ }
+ private void initializeStatManager() {
+ synchronized (this) {
+ if (_statManager == null)
+ _statManager = new StatManager(this);
+ _statManagerInitialized = true;
+ }
+ }
+
+ /**
+ * The session key manager which coordinates the sessionKey / sessionTag
+ * data. This component allows transparent operation of the
+ * ElGamal/AES+SessionTag algorithm, and contains all of the session tags
+     * for one particular application. If you want to separate multiple apps
+ * to have their own sessionTags and sessionKeys, they should use different
+ * I2PAppContexts, and hence, different sessionKeyManagers.
+ *
+ */
+ public SessionKeyManager sessionKeyManager() {
+ if (!_sessionKeyManagerInitialized) initializeSessionKeyManager();
+ return _sessionKeyManager;
+ }
+ private void initializeSessionKeyManager() {
+ synchronized (this) {
+ if (_sessionKeyManager == null)
+ _sessionKeyManager = new PersistentSessionKeyManager(this);
+ _sessionKeyManagerInitialized = true;
+ }
+ }
+
+ /**
+ * Pull up the naming service used in this context. The naming service itself
+ * works by querying the context's properties, so those props should be
+ * specified to customize the naming service exposed.
+ */
+ public NamingService namingService() {
+ if (!_namingServiceInitialized) initializeNamingService();
+ return _namingService;
+ }
+ private void initializeNamingService() {
+ synchronized (this) {
+ if (_namingService == null) {
+ _namingService = NamingService.createInstance(this);
+ }
+ _namingServiceInitialized = true;
+ }
+ }
+
+ public PetNameDB petnameDb() {
+ if (!_petnameDbInitialized) initializePetnameDb();
+ return _petnameDb;
+ }
+ private void initializePetnameDb() {
+ synchronized (this) {
+ if (_petnameDb == null) {
+ _petnameDb = new PetNameDB();
+ }
+ _petnameDbInitialized = true;
+ }
+ }
+
+ /**
+ * This is the ElGamal engine used within this context. While it doesn't
+ * really have anything substantial that is context specific (the algorithm
+ * just does the algorithm), it does transparently use the context for logging
+ * its performance and activity. In addition, the engine can be swapped with
+ * the context's properties (though only someone really crazy should mess with
+ * it ;)
+ */
+ public ElGamalEngine elGamalEngine() {
+ if (!_elGamalEngineInitialized) initializeElGamalEngine();
+ return _elGamalEngine;
+ }
+ private void initializeElGamalEngine() {
+ synchronized (this) {
+ if (_elGamalEngine == null) {
+ if ("off".equals(getProperty("i2p.encryption", "on")))
+ _elGamalEngine = new DummyElGamalEngine(this);
+ else
+ _elGamalEngine = new ElGamalEngine(this);
+ }
+ _elGamalEngineInitialized = true;
+ }
+ }
+
+ /**
+ * Access the ElGamal/AES+SessionTag engine for this context. The algorithm
+ * makes use of the context's sessionKeyManager to coordinate transparent
+ * access to the sessionKeys and sessionTags, as well as the context's elGamal
+ * engine (which in turn keeps stats, etc).
+ *
+ */
+ public ElGamalAESEngine elGamalAESEngine() {
+ if (!_elGamalAESEngineInitialized) initializeElGamalAESEngine();
+ return _elGamalAESEngine;
+ }
+ private void initializeElGamalAESEngine() {
+ synchronized (this) {
+ if (_elGamalAESEngine == null)
+ _elGamalAESEngine = new ElGamalAESEngine(this);
+ _elGamalAESEngineInitialized = true;
+ }
+ }
+
+ /**
+     * Ok, I'll admit it. There is no good reason for having a context specific
+     * AES engine. We don't really keep stats on it, since it's just too fast to
+     * matter. Though for the crazy people out there, we do expose a way to
+ * disable it.
+ */
+ public AESEngine aes() {
+ if (!_AESEngineInitialized) initializeAESEngine();
+ return _AESEngine;
+ }
+ private void initializeAESEngine() {
+ synchronized (this) {
+ if (_AESEngine == null) {
+ if ("off".equals(getProperty("i2p.encryption", "on")))
+ _AESEngine = new AESEngine(this);
+ else
+ _AESEngine = new CryptixAESEngine(this);
+ }
+ _AESEngineInitialized = true;
+ }
+ }
+
+ /**
+ * Query the log manager for this context, which may in turn have its own
+ * set of configuration settings (loaded from the context's properties).
+ * Each context's logManager keeps its own isolated set of Log instances with
+ * their own log levels, output locations, and rotation configuration.
+ */
+ public LogManager logManager() {
+ if (!_logManagerInitialized) initializeLogManager();
+ return _logManager;
+ }
+ private void initializeLogManager() {
+ synchronized (this) {
+ if (_logManager == null)
+ _logManager = new LogManager(this);
+ _logManagerInitialized = true;
+ }
+ }
+ /**
+ * There is absolutely no good reason to make this context specific,
+ * other than for consistency, and perhaps later we'll want to
+ * include some stats.
+ */
+ public HMACGenerator hmac() {
+ if (!_hmacInitialized) initializeHMAC();
+ return _hmac;
+ }
+ private void initializeHMAC() {
+ synchronized (this) {
+ if (_hmac == null) {
+ _hmac= new HMACGenerator(this);
+ }
+ _hmacInitialized = true;
+ }
+ }
+
+ public HMAC256Generator hmac256() {
+ if (!_hmac256Initialized) initializeHMAC256();
+ return _hmac256;
+ }
+ private void initializeHMAC256() {
+ synchronized (this) {
+ if (_hmac256 == null) {
+ _hmac256 = new HMAC256Generator(this);
+ }
+ _hmac256Initialized = true;
+ }
+ }
+
+ /**
+     * Our SHA256 instance (see the hmac discussion for why it's context specific)
+ *
+ */
+ public SHA256Generator sha() {
+ if (!_shaInitialized) initializeSHA();
+ return _sha;
+ }
+ private void initializeSHA() {
+ synchronized (this) {
+ if (_sha == null)
+ _sha= new SHA256Generator(this);
+ _shaInitialized = true;
+ }
+ }
+
+ /**
+ * Our DSA engine (see HMAC and SHA above)
+ *
+ */
+ public DSAEngine dsa() {
+ if (!_dsaInitialized) initializeDSA();
+ return _dsa;
+ }
+ private void initializeDSA() {
+ synchronized (this) {
+ if (_dsa == null) {
+ if ("off".equals(getProperty("i2p.encryption", "on")))
+ _dsa = new DummyDSAEngine(this);
+ else
+ _dsa = new DSAEngine(this);
+ }
+ _dsaInitialized = true;
+ }
+ }
+
+ /**
+ * Component to generate ElGamal, DSA, and Session keys. For why it is in
+ * the appContext, see the DSA, HMAC, and SHA comments above.
+ */
+ public KeyGenerator keyGenerator() {
+ if (!_keyGeneratorInitialized) initializeKeyGenerator();
+ return _keyGenerator;
+ }
+ private void initializeKeyGenerator() {
+ synchronized (this) {
+ if (_keyGenerator == null)
+ _keyGenerator = new KeyGenerator(this);
+ _keyGeneratorInitialized = true;
+ }
+ }
+
+ /**
+ * The context's synchronized clock, which is kept context specific only to
+ * enable simulators to play with clock skew among different instances.
+ *
+ */
+ public Clock clock() {
+ if (!_clockInitialized) initializeClock();
+ return _clock;
+ }
+ private void initializeClock() {
+ synchronized (this) {
+ if (_clock == null)
+ _clock = new Clock(this);
+ _clockInitialized = true;
+ }
+ }
+
+ /**
+ * Determine how much do we want to mess with the keys to turn them
+ * into something we can route. This is context specific because we
+ * may want to test out how things react when peers don't agree on
+ * how to skew.
+ *
+ */
+ public RoutingKeyGenerator routingKeyGenerator() {
+ if (!_routingKeyGeneratorInitialized) initializeRoutingKeyGenerator();
+ return _routingKeyGenerator;
+ }
+ private void initializeRoutingKeyGenerator() {
+ synchronized (this) {
+ if (_routingKeyGenerator == null)
+ _routingKeyGenerator = new RoutingKeyGenerator(this);
+ _routingKeyGeneratorInitialized = true;
+ }
+ }
+
+ /**
+ * [insert snarky comment here]
+ *
+ */
+ public RandomSource random() {
+ if (!_randomInitialized) initializeRandom();
+ return _random;
+ }
+ private void initializeRandom() {
+ synchronized (this) {
+ if (_random == null) {
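+                // the `if (true)` below forces FortunaRandomSource; the
+                // weakPRNG / pooled branches are effectively unreachable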
+ if (true)
+ _random = new FortunaRandomSource(this);
+ else if ("true".equals(getProperty("i2p.weakPRNG", "false")))
+ _random = new DummyPooledRandomSource(this);
+ else
+ _random = new PooledRandomSource(this);
+ }
+ _randomInitialized = true;
+ }
+ }
+}
diff --git a/src/net/i2p/I2PException.java b/src/net/i2p/I2PException.java
new file mode 100644
index 0000000..5eb3801
--- /dev/null
+++ b/src/net/i2p/I2PException.java
@@ -0,0 +1,50 @@
+package net.i2p;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.PrintStream;
+import java.io.PrintWriter;
+
+/**
+ * Base class of I2P exceptions
+ *
+ * @author jrandom
+ */
+public class I2PException extends Exception {
+ private Throwable _source;
+
+ public I2PException() {
+ this(null, null);
+ }
+
+ public I2PException(String msg) {
+ this(msg, null);
+ }
+
+ public I2PException(String msg, Throwable source) {
+ super(msg);
+ _source = source;
+ }
+
+ public void printStackTrace() {
+ if (_source != null) _source.printStackTrace();
+ super.printStackTrace();
+ }
+
+ public void printStackTrace(PrintStream ps) {
+ if (_source != null) _source.printStackTrace(ps);
+ super.printStackTrace(ps);
+ }
+
+ public void printStackTrace(PrintWriter pw) {
+ if (_source != null) _source.printStackTrace(pw);
+ super.printStackTrace(pw);
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/client/naming/AddressDB.java b/src/net/i2p/client/naming/AddressDB.java
new file mode 100644
index 0000000..2ad38c3
--- /dev/null
+++ b/src/net/i2p/client/naming/AddressDB.java
@@ -0,0 +1,59 @@
+package net.i2p.client.naming;
+
+import java.lang.reflect.Constructor;
+import java.util.Collection;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.Log;
+import net.i2p.data.Address;
+
+public abstract class AddressDB {
+
+    private final static Log _log = new Log(AddressDB.class);
+ protected I2PAppContext _context;
+
+ /** what classname should be used as the address db impl? */
+ public static final String PROP_IMPL = "i2p.addressdb.impl";
+ private static final String DEFAULT_IMPL = "net.i2p.client.naming.FilesystemAddressDB";
+
+ /**
+ * The address db should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ protected AddressDB(I2PAppContext context) {
+ _context = context;
+ }
+
+ private AddressDB() { // nop
+ }
+
+ /**
+ * Get an address db instance. This method ensures that there
+ * will be only one address db instance (singleton) as well as
+ * choose the implementation from the "i2p.addressdb.impl" system
+ * property.
+ */
+ public static final synchronized AddressDB createInstance(I2PAppContext context) {
+ AddressDB instance = null;
+ String impl = context.getProperty(PROP_IMPL, DEFAULT_IMPL);
+ try {
+ Class cls = Class.forName(impl);
+ Constructor con = cls.getConstructor(new Class[] { I2PAppContext.class });
+ instance = (AddressDB)con.newInstance(new Object[] { context });
+ } catch (Exception ex) {
+ _log.error("Cannot load address db implementation", ex);
+ instance = new DummyAddressDB(context); // fallback
+ }
+ return instance;
+ }
+
+ public abstract Address get(String hostname);
+ public abstract Address put(Address address);
+ public abstract Address remove(String hostname);
+ public abstract Address remove(Address address);
+ public abstract boolean contains(Address address);
+ public abstract boolean contains(String hostname);
+ public abstract Collection hostnames();
+}
diff --git a/src/net/i2p/client/naming/AddressDBNamingService.java b/src/net/i2p/client/naming/AddressDBNamingService.java
new file mode 100644
index 0000000..04abba4
--- /dev/null
+++ b/src/net/i2p/client/naming/AddressDBNamingService.java
@@ -0,0 +1,42 @@
+package net.i2p.client.naming;
+
+import java.util.Iterator;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Destination;
+import net.i2p.data.Address;
+
+public class AddressDBNamingService extends NamingService {
+
+ private AddressDB _addressdb;
+
+ public AddressDBNamingService(I2PAppContext context) {
+ super(context);
+ _addressdb = AddressDB.createInstance(context);
+ }
+
+ private AddressDBNamingService() {
+ super(null);
+ }
+
+ public Destination lookup(String hostname) {
+ Address addr = _addressdb.get(hostname);
+ if (addr != null) {
+ return addr.getDestination();
+ } else {
+ // If we can't find hostname in the addressdb, assume it's a key.
+ return lookupBase64(hostname);
+ }
+ }
+
+ public String reverseLookup(Destination dest) {
+ Iterator iter = _addressdb.hostnames().iterator();
+ while (iter.hasNext()) {
+ Address addr = _addressdb.get((String)iter.next());
+ if (addr != null && addr.getDestination().equals(dest)) {
+ return addr.getHostname();
+ }
+ }
+ return null;
+ }
+}
diff --git a/src/net/i2p/client/naming/DummyAddressDB.java b/src/net/i2p/client/naming/DummyAddressDB.java
new file mode 100644
index 0000000..3d151b5
--- /dev/null
+++ b/src/net/i2p/client/naming/DummyAddressDB.java
@@ -0,0 +1,42 @@
+package net.i2p.client.naming;
+
+import java.util.Collection;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Address;
+
+public class DummyAddressDB extends AddressDB {
+
+ public DummyAddressDB(I2PAppContext context) {
+ super(context);
+ }
+
+ public Address get(String hostname) {
+ return null;
+ }
+
+ public Address put(Address address) {
+ return null;
+ }
+
+ public Address remove(String hostname) {
+ return null;
+ }
+
+ public Address remove(Address address) {
+ return null;
+ }
+
+ public boolean contains(Address address) {
+ return false;
+ }
+
+ public boolean contains(String hostname) {
+ return false;
+ }
+
+ public Collection hostnames() {
+ return null;
+ }
+
+}
diff --git a/src/net/i2p/client/naming/DummyNamingService.java b/src/net/i2p/client/naming/DummyNamingService.java
new file mode 100644
index 0000000..e956dfc
--- /dev/null
+++ b/src/net/i2p/client/naming/DummyNamingService.java
@@ -0,0 +1,33 @@
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by mihi in 2004 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ */
+package net.i2p.client.naming;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Destination;
+
+/**
+ * A Dummy naming service that can only handle base64 destinations.
+ */
+class DummyNamingService extends NamingService {
+ /**
+ * The naming service should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ protected DummyNamingService(I2PAppContext context) { super(context); }
+ private DummyNamingService() { super(null); }
+
+ public Destination lookup(String hostname) {
+ return lookupBase64(hostname);
+ }
+
+ public String reverseLookup(Destination dest) {
+ return null;
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/client/naming/FilesystemAddressDB.java b/src/net/i2p/client/naming/FilesystemAddressDB.java
new file mode 100644
index 0000000..4a2e37e
--- /dev/null
+++ b/src/net/i2p/client/naming/FilesystemAddressDB.java
@@ -0,0 +1,118 @@
+package net.i2p.client.naming;
+
+import java.util.Collection;
+import java.util.Arrays;
+import java.util.Properties;
+import java.util.Iterator;
+import java.io.*;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Address;
+import net.i2p.data.DataFormatException;
+import net.i2p.data.DataHelper;
+import net.i2p.util.Log;
+
+public class FilesystemAddressDB extends AddressDB {
+
+ public final static String PROP_ADDRESS_DIR = "i2p.addressdir";
+ public final static String DEFAULT_ADDRESS_DIR = "addressDb";
+ private final static Log _log = new Log(FilesystemAddressDB.class);
+
+ public FilesystemAddressDB(I2PAppContext context) {
+ super(context);
+
+ //If the address db directory doesn't exist, create it, using the
+ //contents of hosts.txt.
+ String dir = _context.getProperty(PROP_ADDRESS_DIR, DEFAULT_ADDRESS_DIR);
+ File addrDir = new File(dir);
+ if (!addrDir.exists()) {
+ addrDir.mkdir();
+ Properties hosts = new Properties();
+ File hostsFile = new File("hosts.txt");
+ if (hostsFile.exists() && hostsFile.canRead()) {
+ try {
+ DataHelper.loadProps(hosts, hostsFile);
+ } catch (IOException ioe) {
+ _log.error("Error loading hosts file " + hostsFile, ioe);
+ }
+ }
+ Iterator iter = hosts.keySet().iterator();
+ while (iter.hasNext()) {
+ String hostname = (String)iter.next();
+ Address addr = new Address();
+ addr.setHostname(hostname);
+ addr.setDestination(hosts.getProperty(hostname));
+ put(addr);
+ }
+ }
+ }
+
+ public Address get(String hostname) {
+ String dir = _context.getProperty(PROP_ADDRESS_DIR, DEFAULT_ADDRESS_DIR);
+ File f = new File(dir, hostname);
+ if (f.exists() && f.canRead()) {
+ Address addr = new Address();
+ try {
+ addr.readBytes(new FileInputStream(f));
+ } catch (FileNotFoundException exp) {
+ return null;
+ } catch (DataFormatException exp) {
+ _log.error(f.getPath() + " is not a valid address file.");
+ return null;
+ } catch (IOException exp) {
+ _log.error("Error reading " + f.getPath());
+ return null;
+ }
+ return addr;
+ } else {
+ _log.warn(f.getPath() + " does not exist.");
+ return null;
+ }
+ }
+
+ public Address put(Address address) {
+ Address previous = get(address.getHostname());
+
+ String dir = _context.getProperty(PROP_ADDRESS_DIR, DEFAULT_ADDRESS_DIR);
+ File f = new File(dir, address.getHostname());
+ try {
+ address.writeBytes(new FileOutputStream(f));
+ } catch (Exception exp) {
+ _log.error("Error writing " + f.getPath(), exp);
+ }
+ return previous;
+ }
+
+ public Address remove(String hostname) {
+ Address previous = get(hostname);
+
+ String dir = _context.getProperty(PROP_ADDRESS_DIR, DEFAULT_ADDRESS_DIR);
+ File f = new File(dir, hostname);
+ f.delete();
+ return previous;
+ }
+
+ public Address remove(Address address) {
+ if (contains(address)) {
+ return remove(address.getHostname());
+ } else {
+ return null;
+ }
+ }
+
+ public boolean contains(Address address) {
+ Address inDb = get(address.getHostname());
+        return inDb != null && inDb.equals(address);  // get() returns null if unknown
+ }
+
+ public boolean contains(String hostname) {
+ return hostnames().contains(hostname);
+ }
+
+ public Collection hostnames() {
+ String dir = _context.getProperty(PROP_ADDRESS_DIR, DEFAULT_ADDRESS_DIR);
+ File f = new File(dir);
+ return Arrays.asList(f.list());
+ }
+
+}
diff --git a/src/net/i2p/client/naming/HostsTxtNamingService.java b/src/net/i2p/client/naming/HostsTxtNamingService.java
new file mode 100644
index 0000000..20a5912
--- /dev/null
+++ b/src/net/i2p/client/naming/HostsTxtNamingService.java
@@ -0,0 +1,90 @@
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by mihi in 2004 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ */
+package net.i2p.client.naming;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.StringTokenizer;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Destination;
+import net.i2p.util.Log;
+
+/**
+ * A naming service based on the "hosts.txt" file.
+ */
+public class HostsTxtNamingService extends NamingService {
+
+ /**
+ * The naming service should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public HostsTxtNamingService(I2PAppContext context) { super(context); }
+ private HostsTxtNamingService() { super(null); }
+
+ /**
+ * If this system property is specified, the tunnel will read the
+ * given file for hostname=destKey values when resolving names
+ */
+ public final static String PROP_HOSTS_FILE = "i2p.hostsfilelist";
+
+ /** default hosts.txt filename */
+ public final static String DEFAULT_HOSTS_FILE =
+ "privatehosts.txt,userhosts.txt,hosts.txt";
+
+ private final static Log _log = new Log(HostsTxtNamingService.class);
+
+ private List getFilenames() {
+ String list = _context.getProperty(PROP_HOSTS_FILE, DEFAULT_HOSTS_FILE);
+ StringTokenizer tok = new StringTokenizer(list, ",");
+ List rv = new ArrayList(tok.countTokens());
+ while (tok.hasMoreTokens())
+ rv.add(tok.nextToken());
+ return rv;
+ }
+
+ public Destination lookup(String hostname) {
+ // check the list each time, reloading the file on each
+ // lookup
+
+ List filenames = getFilenames();
+ for (int i = 0; i < filenames.size(); i++) {
+ String hostsfile = (String)filenames.get(i);
+ Properties hosts = new Properties();
+ try {
+ File f = new File(hostsfile);
+ if ( (f.exists()) && (f.canRead()) ) {
+ DataHelper.loadProps(hosts, f, true);
+
+ String key = hosts.getProperty(hostname.toLowerCase());
+ if ( (key != null) && (key.trim().length() > 0) ) {
+ return lookupBase64(key);
+ }
+
+ } else {
+ _log.warn("Hosts file " + hostsfile + " does not exist.");
+ }
+ } catch (Exception ioe) {
+ _log.error("Error loading hosts file " + hostsfile, ioe);
+ }
+ // not found, continue to the next file
+ }
+ // If we can't find name in any of the hosts files,
+ // assume it's a key.
+ return lookupBase64(hostname);
+ }
+
+ public String reverseLookup(Destination dest) {
+ return null;
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/client/naming/MetaNamingService.java b/src/net/i2p/client/naming/MetaNamingService.java
new file mode 100644
index 0000000..00cd382
--- /dev/null
+++ b/src/net/i2p/client/naming/MetaNamingService.java
@@ -0,0 +1,60 @@
+package net.i2p.client.naming;
+
+import java.lang.reflect.Constructor;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Iterator;
+import java.util.StringTokenizer;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.Destination;
+
+public class MetaNamingService extends NamingService {
+
+ private final static String PROP_NAME_SERVICES = "i2p.nameservicelist";
+ private final static String DEFAULT_NAME_SERVICES =
+ "net.i2p.client.naming.PetNameNamingService,net.i2p.client.naming.HostsTxtNamingService";
+ private List _services;
+
+ public MetaNamingService(I2PAppContext context) {
+ super(context);
+
+ String list = _context.getProperty(PROP_NAME_SERVICES, DEFAULT_NAME_SERVICES);
+ StringTokenizer tok = new StringTokenizer(list, ",");
+ _services = new ArrayList(tok.countTokens());
+ while (tok.hasMoreTokens()) {
+ try {
+ Class cls = Class.forName(tok.nextToken());
+ Constructor con = cls.getConstructor(new Class[] { I2PAppContext.class });
+ _services.add(con.newInstance(new Object[] { context }));
+ } catch (Exception ex) {
+ _services.add(new DummyNamingService(context)); // fallback
+ }
+ }
+ }
+
+ public Destination lookup(String hostname) {
+ Iterator iter = _services.iterator();
+ while (iter.hasNext()) {
+ NamingService ns = (NamingService)iter.next();
+ Destination dest = ns.lookup(hostname);
+ if (dest != null) {
+ return dest;
+ }
+ }
+ return lookupBase64(hostname);
+ }
+
+ public String reverseLookup(Destination dest) {
+ Iterator iter = _services.iterator();
+ while (iter.hasNext()) {
+ NamingService ns = (NamingService)iter.next();
+ String hostname = ns.reverseLookup(dest);
+ if (hostname != null) {
+ return hostname;
+ }
+ }
+ return null;
+ }
+
+}
diff --git a/src/net/i2p/client/naming/NamingService.java b/src/net/i2p/client/naming/NamingService.java
new file mode 100644
index 0000000..43f0036
--- /dev/null
+++ b/src/net/i2p/client/naming/NamingService.java
@@ -0,0 +1,92 @@
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by mihi in 2004 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ */
+package net.i2p.client.naming;
+
+import java.lang.reflect.Constructor;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataFormatException;
+import net.i2p.data.Destination;
+import net.i2p.util.Log;
+
+/**
+ * Naming services create a subclass of this class.
+ */
+public abstract class NamingService {
+
+ private final static Log _log = new Log(NamingService.class);
+ protected I2PAppContext _context;
+
+ /** what classname should be used as the naming service impl? */
+ public static final String PROP_IMPL = "i2p.naming.impl";
+ private static final String DEFAULT_IMPL = "net.i2p.client.naming.MetaNamingService";
+
+
+ /**
+ * The naming service should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ protected NamingService(I2PAppContext context) {
+ _context = context;
+ }
+ private NamingService() { // nop
+ }
+
+ /**
+ * Look up a host name.
+ * @return the Destination for this host name, or
+     * null if name is unknown.
+ */
+ public abstract Destination lookup(String hostname);
+
+ /**
+ * Reverse look up a destination
+ * @return a host name for this Destination, or null
+ * if none is known. It is safe for subclasses to always return
+     * null if no reverse lookup is possible.
+ */
+ public abstract String reverseLookup(Destination dest);
+
+ /**
+ * Check if host name is valid Base64 encoded dest and return this
+ * dest in that case. Useful as a "fallback" in custom naming
+ * implementations.
+ */
+ protected Destination lookupBase64(String hostname) {
+ try {
+ Destination result = new Destination();
+ result.fromBase64(hostname);
+ return result;
+ } catch (DataFormatException dfe) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Error translating [" + hostname + "]", dfe);
+ return null;
+ }
+ }
+
+ /**
+ * Get a naming service instance. This method ensures that there
+ * will be only one naming service instance (singleton) as well as
+ * choose the implementation from the "i2p.naming.impl" system
+ * property.
+ */
+ public static final synchronized NamingService createInstance(I2PAppContext context) {
+ NamingService instance = null;
+ String impl = context.getProperty(PROP_IMPL, DEFAULT_IMPL);
+ try {
+ Class cls = Class.forName(impl);
+ Constructor con = cls.getConstructor(new Class[] { I2PAppContext.class });
+ instance = (NamingService)con.newInstance(new Object[] { context });
+ } catch (Exception ex) {
+            _log.error("Cannot load naming service implementation", ex);
+ instance = new DummyNamingService(context); // fallback
+ }
+ return instance;
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/client/naming/PetName.java b/src/net/i2p/client/naming/PetName.java
new file mode 100644
index 0000000..8b5fdda
--- /dev/null
+++ b/src/net/i2p/client/naming/PetName.java
@@ -0,0 +1,172 @@
+package net.i2p.client.naming;
+
+import java.util.*;
+import net.i2p.data.DataHelper;
+
+/**
+ *
+ */
+public class PetName {
+ private String _name;
+ private String _network;
+ private String _protocol;
+ private List _groups;
+ private boolean _isPublic;
+ private String _location;
+
+ public PetName() {
+ this(null, null, null, null);
+ }
+ public PetName(String name, String network, String protocol, String location) {
+ _name = name;
+ _network = network;
+ _protocol = protocol;
+ _location = location;
+ _groups = new ArrayList();
+ _isPublic = false;
+ }
+ /**
+ * @param dbLine name:network:protocol:isPublic:group1,group2,group3:location
+ */
+ public PetName(String dbLine) {
+ _groups = new ArrayList();
+ StringTokenizer tok = new StringTokenizer(dbLine, ":\n", true);
+ int tokens = tok.countTokens();
+ //System.out.println("Tokens: " + tokens);
+ if (tokens < 7) {
+ return;
+ }
+ String s = tok.nextToken();
+ if (":".equals(s)) {
+ _name = null;
+ } else {
+ _name = s;
+ s = tok.nextToken(); // skip past the :
+ }
+ s = tok.nextToken();
+ if (":".equals(s)) {
+ _network = null;
+ } else {
+ _network = s;
+ s = tok.nextToken(); // skip past the :
+ }
+ s = tok.nextToken();
+ if (":".equals(s)) {
+ _protocol = null;
+ } else {
+ _protocol = s;
+ s = tok.nextToken(); // skip past the :
+ }
+ s = tok.nextToken();
+ if (":".equals(s)) {
+ _isPublic = false;
+ } else {
+ if ("true".equals(s))
+ _isPublic = true;
+ else
+ _isPublic = false;
+ s = tok.nextToken(); // skip past the :
+ }
+ s = tok.nextToken();
+ if (":".equals(s)) {
+ // noop
+ } else {
+ StringTokenizer gtok = new StringTokenizer(s, ",");
+ while (gtok.hasMoreTokens())
+ _groups.add(gtok.nextToken().trim());
+ s = tok.nextToken(); // skip past the :
+ }
+ while (tok.hasMoreTokens()) {
+ if (_location == null)
+ _location = tok.nextToken();
+ else
+ _location = _location + tok.nextToken();
+ }
+ }
+
+ public String getName() { return _name; }
+ public String getNetwork() { return _network; }
+ public String getProtocol() { return _protocol; }
+ public String getLocation() { return _location; }
+ public boolean getIsPublic() { return _isPublic; }
+ public int getGroupCount() { return _groups.size(); }
+ public String getGroup(int i) { return (String)_groups.get(i); }
+
+ public void setName(String name) { _name = name; }
+ public void setNetwork(String network) { _network = network; }
+ public void setProtocol(String protocol) { _protocol = protocol; }
+ public void setLocation(String location) { _location = location; }
+ public void setIsPublic(boolean pub) { _isPublic = pub; }
+ public void addGroup(String name) {
+ if ( (name != null) && (name.length() > 0) && (!_groups.contains(name)) )
+ _groups.add(name);
+ }
+ public void removeGroup(String name) { _groups.remove(name); }
+ public void setGroups(String groups) {
+ if (groups != null) {
+ _groups.clear();
+ StringTokenizer tok = new StringTokenizer(groups, ", \t");
+ while (tok.hasMoreTokens())
+ addGroup(tok.nextToken().trim());
+ } else {
+ _groups.clear();
+ }
+ }
+ public boolean isMember(String group) {
+ for (int i = 0; i < getGroupCount(); i++)
+ if (getGroup(i).equals(group))
+ return true;
+ return false;
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(256);
+ if (_name != null) buf.append(_name.trim());
+ buf.append(':');
+ if (_network != null) buf.append(_network.trim());
+ buf.append(':');
+ if (_protocol != null) buf.append(_protocol.trim());
+ buf.append(':').append(_isPublic).append(':');
+ if (_groups != null) {
+ for (int i = 0; i < _groups.size(); i++) {
+ buf.append(((String)_groups.get(i)).trim());
+ if (i + 1 < _groups.size())
+ buf.append(',');
+ }
+ }
+ buf.append(':');
+ if (_location != null) buf.append(_location.trim());
+ return buf.toString();
+ }
+
+ public boolean equals(Object obj) {
+ if ( (obj == null) || !(obj instanceof PetName) ) return false;
+ PetName pn = (PetName)obj;
+ return DataHelper.eq(_name, pn._name) &&
+ DataHelper.eq(_location, pn._location) &&
+ DataHelper.eq(_network, pn._network) &&
+ DataHelper.eq(_protocol, pn._protocol);
+ }
+ public int hashCode() {
+ int rv = 0;
+ rv += DataHelper.hashCode(_name);
+ rv += DataHelper.hashCode(_location);
+ rv += DataHelper.hashCode(_network);
+ rv += DataHelper.hashCode(_protocol);
+ return rv;
+ }
+
+ public static void main(String args[]) {
+ test("a:b:c:true:e:f");
+ test("a:::true::d");
+ test("a:::true::");
+ test("a:b::true::");
+ test(":::trye::");
+ test("a:b:c:true:e:http://foo.bar");
+ }
+ private static void test(String line) {
+ PetName pn = new PetName(line);
+ String val = pn.toString();
+ System.out.println("OK? " + val.equals(line) + ": " + line + " [" + val + "]");
+ }
+}
diff --git a/src/net/i2p/client/naming/PetNameDB.java b/src/net/i2p/client/naming/PetNameDB.java
new file mode 100644
index 0000000..c335a93
--- /dev/null
+++ b/src/net/i2p/client/naming/PetNameDB.java
@@ -0,0 +1,103 @@
+package net.i2p.client.naming;
+
+import java.io.*;
+import java.util.*;
+
+
+/**
+ *
+ */
+public class PetNameDB {
+ /** name (String) to PetName mapping */
+ private Map _names;
+ private String _path;
+
+ public PetNameDB() {
+ _names = Collections.synchronizedMap(new HashMap());
+ }
+
+ public PetName getByName(String name) {
+ if ( (name == null) || (name.length() <= 0) ) return null;
+ return (PetName)_names.get(name.toLowerCase());
+ }
+ public void add(PetName pn) {
+ if ( (pn == null) || (pn.getName() == null) ) return;
+ _names.put(pn.getName().toLowerCase(), pn);
+ }
+ public void clear() { _names.clear(); }
+ public boolean contains(PetName pn) { return _names.containsValue(pn); }
+ public boolean containsName(String name) {
+ if ( (name == null) || (name.length() <= 0) ) return false;
+ return _names.containsKey(name.toLowerCase());
+ }
+ public boolean isEmpty() { return _names.isEmpty(); }
+ public Iterator iterator() { return new ArrayList(_names.values()).iterator(); }
+ public void remove(PetName pn) {
+ if (pn != null) _names.remove(pn.getName().toLowerCase());
+ }
+ public void removeName(String name) {
+ if (name != null) _names.remove(name.toLowerCase());
+ }
+ public int size() { return _names.size(); }
+ public Set getNames() { return new HashSet(_names.keySet()); }
+ public List getGroups() {
+ List rv = new ArrayList();
+ for (Iterator iter = iterator(); iter.hasNext(); ) {
+ PetName name = (PetName)iter.next();
+ for (int i = 0; i < name.getGroupCount(); i++)
+ if (!rv.contains(name.getGroup(i)))
+ rv.add(name.getGroup(i));
+ }
+ return rv;
+ }
+
+ public PetName getByLocation(String location) {
+ if (location == null) return null;
+ synchronized (_names) {
+ for (Iterator iter = iterator(); iter.hasNext(); ) {
+ PetName name = (PetName)iter.next();
+ if ( (name.getLocation() != null) && (name.getLocation().trim().equals(location.trim())) )
+ return name;
+ }
+ }
+ return null;
+ }
+
+ public void load(String location) throws IOException {
+ _path = location;
+ File f = new File(location);
+ if (!f.exists()) return;
+ BufferedReader in = null;
+ try {
+ in = new BufferedReader(new InputStreamReader(new FileInputStream(f), "UTF-8"));
+ String line = null;
+ while ( (line = in.readLine()) != null) {
+ PetName name = new PetName(line);
+ if (name.getName() != null)
+ add(name);
+ }
+ } finally {
+            if (in != null) in.close();
+ }
+ }
+
+ public void store(String location) throws IOException {
+ Writer out = null;
+ try {
+ out = new OutputStreamWriter(new FileOutputStream(location), "UTF-8");
+ for (Iterator iter = iterator(); iter.hasNext(); ) {
+ PetName name = (PetName)iter.next();
+ if (name != null)
+ out.write(name.toString() + "\n");
+ }
+ } finally {
+            if (out != null) out.close();
+ }
+ }
+
+ public void store() throws IOException {
+ if (_path != null) {
+ store(_path);
+ }
+ }
+}
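
The file format used by load()/store() above is simply one serialized PetName per line, in the name:network:protocol:isPublic:groups:location form produced by PetName.toString(). A minimal usage sketch, assuming the PetName and PetNameDB classes from this patch are on the classpath; the class name PetNameDBDemo, the file name petnames-demo.txt, and the sample entry are purely illustrative:

    import java.io.IOException;
    import net.i2p.client.naming.PetName;
    import net.i2p.client.naming.PetNameDB;

    public class PetNameDBDemo {
        public static void main(String args[]) throws IOException {
            PetNameDB db = new PetNameDB();
            // parse a serialized entry: name:network:protocol:isPublic:groups:location
            db.add(new PetName("example:i2p:http:true:friends:http://foo.bar"));
            db.store("petnames-demo.txt");          // one PetName.toString() per line, UTF-8

            PetNameDB reloaded = new PetNameDB();
            reloaded.load("petnames-demo.txt");     // a missing file is silently ignored
            PetName pn = reloaded.getByName("Example"); // lookups are case-insensitive
            System.out.println(pn != null ? pn.getLocation() : "not found");
        }
    }
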
diff --git a/src/net/i2p/client/naming/PetNameNamingService.java b/src/net/i2p/client/naming/PetNameNamingService.java
new file mode 100644
index 0000000..fb57a3c
--- /dev/null
+++ b/src/net/i2p/client/naming/PetNameNamingService.java
@@ -0,0 +1,65 @@
+package net.i2p.client.naming;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Properties;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Destination;
+
+public class PetNameNamingService extends NamingService {
+
+ private PetNameDB _petnameDb;
+ public final static String PROP_PETNAME_FILE = "i2p.petnamefile";
+ public final static String DEFAULT_PETNAME_FILE = "petnames.txt";
+
+ public PetNameNamingService(I2PAppContext context) {
+ super(context);
+ _petnameDb = _context.petnameDb();
+ String file = _context.getProperty(PROP_PETNAME_FILE, DEFAULT_PETNAME_FILE);
+
+ //If the petnamedb file doesn't exist, create it, using the
+ //contents of hosts.txt.
+// File nameFile = new File(file);
+// if (!nameFile.exists()) {
+// Properties hosts = new Properties();
+// File hostsFile = new File("hosts.txt");
+// if (hostsFile.exists() && hostsFile.canRead()) {
+// try {
+// DataHelper.loadProps(hosts, hostsFile);
+// } catch (IOException ioe) {
+// }
+// }
+// Iterator iter = hosts.keySet().iterator();
+// while (iter.hasNext()) {
+// String hostname = (String)iter.next();
+// PetName pn = new PetName(hostname, "i2p", "http", hosts.getProperty(hostname));
+// _petnameDb.set(hostname, pn);
+// }
+// try {
+// _petnameDb.store(file);
+// } catch (IOException ioe) {
+// }
+// }
+
+ try {
+ _petnameDb.load(file);
+ } catch (IOException ioe) {
+ }
+ }
+
+ public Destination lookup(String hostname) {
+ PetName name = _petnameDb.getByName(hostname);
+ if (name != null && name.getNetwork().equalsIgnoreCase("i2p")) {
+ return lookupBase64(name.getLocation());
+ } else {
+ return lookupBase64(hostname);
+ }
+ }
+
+ public String reverseLookup(Destination dest) {
+        PetName name = _petnameDb.getByLocation(dest.toBase64());
+        return (name != null ? name.getName() : null);
+ }
+}
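
The lookup() above resolves a hostname by first consulting the petname database and, when there is no matching i2p entry, falling back to treating the name itself as a base64 destination. A small sketch of just that decision, factored out as a plain helper so it can be exercised without an I2PAppContext; the class name PetNameLookupDemo, the method resolveToBase64, and the sample values are illustrative only:

    import net.i2p.client.naming.PetName;
    import net.i2p.client.naming.PetNameDB;

    public class PetNameLookupDemo {
        /** Return the base64 destination string that should be decoded for this hostname. */
        static String resolveToBase64(PetNameDB db, String hostname) {
            PetName pn = db.getByName(hostname);
            if (pn != null && "i2p".equalsIgnoreCase(pn.getNetwork()))
                return pn.getLocation();   // petname hit: its location is the destination
            return hostname;               // fall through: treat the name itself as base64
        }

        public static void main(String args[]) {
            PetNameDB db = new PetNameDB();
            db.add(new PetName("example:i2p:http:true::base64destgoeshere"));
            System.out.println(resolveToBase64(db, "example"));        // -> stored location
            System.out.println(resolveToBase64(db, "SomeBase64Dest")); // -> passed through as-is
        }
    }
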
diff --git a/src/net/i2p/crypto/AESEngine.java b/src/net/i2p/crypto/AESEngine.java
new file mode 100644
index 0000000..a67281b
--- /dev/null
+++ b/src/net/i2p/crypto/AESEngine.java
@@ -0,0 +1,181 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.SessionKey;
+import net.i2p.util.Log;
+import net.i2p.util.RandomSource;
+
+/**
+ * Dummy wrapper for AES cipher operation.
+ *
+ */
+public class AESEngine {
+ private Log _log;
+ private I2PAppContext _context;
+ public AESEngine(I2PAppContext ctx) {
+ _context = ctx;
+ _log = _context.logManager().getLog(AESEngine.class);
+ if (getClass() == AESEngine.class)
+ _log.warn("Warning: AES is disabled");
+ }
+
+ /** Encrypt the payload with the session key
+ * @param payload data to be encrypted
+ * @param payloadIndex index into the payload to start encrypting
+ * @param out where to store the result
+ * @param outIndex where in out to start writing
+     * @param sessionKey private session key to encrypt to
+ * @param iv IV for CBC
+ * @param length how much data to encrypt
+ */
+ public void encrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int length) {
+ encrypt(payload, payloadIndex, out, outIndex, sessionKey, iv, 0, length);
+ }
+
+ /** Encrypt the payload with the session key
+ * @param payload data to be encrypted
+ * @param payloadIndex index into the payload to start encrypting
+ * @param out where to store the result
+ * @param outIndex where in out to start writing
+     * @param sessionKey private session key to encrypt to
+ * @param iv IV for CBC
+ * @param length how much data to encrypt
+ */
+ public void encrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int ivOffset, int length) {
+ System.arraycopy(payload, payloadIndex, out, outIndex, length);
+ _log.warn("Warning: AES is disabled");
+ }
+
+ public byte[] safeEncrypt(byte payload[], SessionKey sessionKey, byte iv[], int paddedSize) {
+ if ((iv == null) || (payload == null) || (sessionKey == null) || (iv.length != 16)) return null;
+
+ int size = Hash.HASH_LENGTH
+ + 4 // sizeof(payload)
+ + payload.length;
+ int padding = ElGamalAESEngine.getPaddingSize(size, paddedSize);
+
+ byte data[] = new byte[size + padding];
+ Hash h = _context.sha().calculateHash(iv);
+
+ int cur = 0;
+ System.arraycopy(h.getData(), 0, data, cur, Hash.HASH_LENGTH);
+ cur += Hash.HASH_LENGTH;
+
+ DataHelper.toLong(data, cur, 4, payload.length);
+ cur += 4;
+ System.arraycopy(payload, 0, data, cur, payload.length);
+ cur += payload.length;
+ byte paddingData[] = ElGamalAESEngine.getPadding(_context, size, paddedSize);
+ System.arraycopy(paddingData, 0, data, cur, paddingData.length);
+
+ encrypt(data, 0, data, 0, sessionKey, iv, data.length);
+ return data;
+ }
+
+ public byte[] safeDecrypt(byte payload[], SessionKey sessionKey, byte iv[]) {
+ if ((iv == null) || (payload == null) || (sessionKey == null) || (iv.length != 16)) return null;
+
+ byte decr[] = new byte[payload.length];
+ decrypt(payload, 0, decr, 0, sessionKey, iv, payload.length);
+ if (decr == null) {
+ _log.error("Error decrypting the data - payload " + payload.length + " decrypted to null");
+ return null;
+ }
+
+ int cur = 0;
+ byte h[] = _context.sha().calculateHash(iv).getData();
+ for (int i = 0; i < Hash.HASH_LENGTH; i++) {
+ if (decr[i] != h[i]) {
+ _log.error("Hash does not match [key=" + sessionKey + " / iv =" + DataHelper.toString(iv, iv.length)
+ + "]", new Exception("Hash error"));
+ return null;
+ }
+ }
+ cur += Hash.HASH_LENGTH;
+
+ long len = DataHelper.fromLong(decr, cur, 4);
+ cur += 4;
+
+ if (cur + len > decr.length) {
+ _log.error("Not enough to read");
+ return null;
+ }
+
+ byte data[] = new byte[(int)len];
+ System.arraycopy(decr, cur, data, 0, (int)len);
+ return data;
+ }
+
+
+ /** Decrypt the data with the session key
+ * @param payload data to be decrypted
+ * @param payloadIndex index into the payload to start decrypting
+ * @param out where to store the cleartext
+ * @param outIndex where in out to start writing
+ * @param sessionKey private session key to decrypt to
+ * @param iv IV for CBC
+ * @param length how much data to decrypt
+ */
+ public void decrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int length) {
+ decrypt(payload, payloadIndex, out, outIndex, sessionKey, iv, 0, length);
+ }
+
+ /** Decrypt the data with the session key
+ * @param payload data to be decrypted
+ * @param payloadIndex index into the payload to start decrypting
+ * @param out where to store the cleartext
+ * @param outIndex where in out to start writing
+ * @param sessionKey private session key to decrypt to
+ * @param iv IV for CBC
+ * @param length how much data to decrypt
+ */
+ public void decrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int ivOffset, int length) {
+ System.arraycopy(payload, payloadIndex, out, outIndex, length);
+ _log.warn("Warning: AES is disabled");
+ }
+
+
+ public void encryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte out[], int outIndex) {
+ System.arraycopy(payload, inIndex, out, outIndex, out.length - outIndex);
+ }
+
+ /** decrypt the data with the session key provided
+ * @param payload encrypted data
+ * @param sessionKey private session key
+ */
+ public void decryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte rv[], int outIndex) {
+ System.arraycopy(payload, inIndex, rv, outIndex, rv.length - outIndex);
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ RandomSource.getInstance().nextBytes(iv);
+
+ byte sbuf[] = new byte[16];
+ RandomSource.getInstance().nextBytes(sbuf);
+ byte se[] = new byte[16];
+ ctx.aes().encrypt(sbuf, 0, se, 0, key, iv, sbuf.length);
+ byte sd[] = new byte[16];
+ ctx.aes().decrypt(se, 0, sd, 0, key, iv, se.length);
+ ctx.logManager().getLog(AESEngine.class).debug("Short test: " + DataHelper.eq(sd, sbuf));
+
+ byte lbuf[] = new byte[1024];
+ RandomSource.getInstance().nextBytes(sbuf);
+ byte le[] = ctx.aes().safeEncrypt(lbuf, key, iv, 2048);
+ byte ld[] = ctx.aes().safeDecrypt(le, key, iv);
+ ctx.logManager().getLog(AESEngine.class).debug("Long test: " + DataHelper.eq(ld, lbuf));
+ }
+}
\ No newline at end of file
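
safeEncrypt() above frames the payload as SHA-256(IV), a 4-byte payload length, the payload itself, and then padding up to the requested padded size, before CBC-encrypting the whole buffer. The sketch below reproduces only the framing step with the standard JDK MessageDigest; it is an illustration of the layout, not the project's code: the class/method names are made up, the length is written big-endian here on the assumption that DataHelper.toLong does the same, and random filler bytes stand in for ElGamalAESEngine.getPadding()/getPaddingSize():

    import java.security.MessageDigest;
    import java.security.SecureRandom;

    public class SafeEncryptFramingDemo {
        /** Build the plaintext layout that AESEngine.safeEncrypt() then CBC-encrypts. */
        static byte[] frame(byte payload[], byte iv[], int paddedSize) throws Exception {
            byte hash[] = MessageDigest.getInstance("SHA-256").digest(iv); // 32-byte SHA-256(IV)
            int size = hash.length + 4 + payload.length;
            int padding = (paddedSize > size ? paddedSize - size : 0);     // stand-in for getPaddingSize()

            byte data[] = new byte[size + padding];
            int cur = 0;
            System.arraycopy(hash, 0, data, cur, hash.length);
            cur += hash.length;
            // 4-byte payload length, big-endian (assumed to match DataHelper.toLong)
            data[cur++] = (byte) (payload.length >>> 24);
            data[cur++] = (byte) (payload.length >>> 16);
            data[cur++] = (byte) (payload.length >>> 8);
            data[cur++] = (byte) payload.length;
            System.arraycopy(payload, 0, data, cur, payload.length);
            cur += payload.length;
            byte pad[] = new byte[padding];            // filler bytes; the real code uses
            new SecureRandom().nextBytes(pad);         // ElGamalAESEngine.getPadding()
            System.arraycopy(pad, 0, data, cur, pad.length);
            return data;                               // this buffer is what gets CBC encrypted
        }

        public static void main(String args[]) throws Exception {
            byte iv[] = new byte[16];
            byte payload[] = "hello".getBytes("UTF-8");
            System.out.println("framed length: " + frame(payload, iv, 128).length);
        }
    }
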
diff --git a/src/net/i2p/crypto/AESInputStream.java b/src/net/i2p/crypto/AESInputStream.java
new file mode 100644
index 0000000..cdc11bb
--- /dev/null
+++ b/src/net/i2p/crypto/AESInputStream.java
@@ -0,0 +1,460 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.FilterInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Hash;
+import net.i2p.data.SessionKey;
+import net.i2p.util.Clock;
+import net.i2p.util.Log;
+import net.i2p.util.RandomSource;
+
+/**
+ * This reads an underlying stream as written by AESOutputStream - AES256 encrypted
+ * in CBC mode with PKCS#5 padding, with the padding on each and every block of
+ * 16 bytes. This minimizes the overhead when communication is intermittent,
+ * rather than when streams of large sets of data are sent (in which case, the
+ * padding would be on a larger size - say, 1k, though in the worst case that
+ * would have 1023 bytes of padding, while in the worst case here, we only have
+ * 15 bytes of padding). So we have an expansion factor of 6.25%. c'est la vie
+ *
+ */
+public class AESInputStream extends FilterInputStream {
+ private Log _log;
+ private I2PAppContext _context;
+ private SessionKey _key;
+ private byte[] _lastBlock;
+ private boolean _eofFound;
+ private long _cumulativeRead; // how many read from the source stream
+ private long _cumulativePrepared; // how many bytes decrypted and added to _readyBuf
+ private long _cumulativePaddingStripped; // how many bytes have been stripped
+
+ /** read but not yet decrypted */
+ private byte _encryptedBuf[];
+ /** how many bytes have been added to the encryptedBuf since it was decrypted? */
+ private int _writesSinceDecrypt;
+ /** decrypted bytes ready for reading (first available == index of 0) */
+ private int _decryptedBuf[];
+ /** how many bytes are available for reading without decrypt? */
+ private int _decryptedSize;
+
+ private final static int BLOCK_SIZE = CryptixRijndael_Algorithm._BLOCK_SIZE;
+
+ public AESInputStream(I2PAppContext context, InputStream source, SessionKey key, byte[] iv) {
+ super(source);
+ _context = context;
+ _log = context.logManager().getLog(AESInputStream.class);
+ _key = key;
+ _lastBlock = new byte[BLOCK_SIZE];
+ System.arraycopy(iv, 0, _lastBlock, 0, BLOCK_SIZE);
+ _encryptedBuf = new byte[BLOCK_SIZE];
+ _writesSinceDecrypt = 0;
+ _decryptedBuf = new int[BLOCK_SIZE-1];
+ _decryptedSize = 0;
+ _cumulativePaddingStripped = 0;
+ _eofFound = false;
+ }
+
+ public int read() throws IOException {
+ while ((!_eofFound) && (_decryptedSize <= 0)) {
+ refill();
+ }
+ if (_decryptedSize > 0) {
+ int c = _decryptedBuf[0];
+ System.arraycopy(_decryptedBuf, 1, _decryptedBuf, 0, _decryptedBuf.length-1);
+ _decryptedSize--;
+ return c;
+ } else if (_eofFound) {
+ return -1;
+ } else {
+ throw new IOException("Not EOF, but none available? " + _decryptedSize
+ + "/" + _writesSinceDecrypt
+ + "/" + _cumulativeRead + "... impossible");
+ }
+ }
+
+ public int read(byte dest[]) throws IOException {
+ return read(dest, 0, dest.length);
+ }
+
+ public int read(byte dest[], int off, int len) throws IOException {
+ for (int i = 0; i < len; i++) {
+ int val = read();
+ if (val == -1) {
+ // no more to read... can they expect more?
+ if (_eofFound && (i == 0)) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.info("EOF? " + _eofFound
+ + "\nread=" + i + " decryptedSize=" + _decryptedSize
+ + " \nencryptedSize=" + _writesSinceDecrypt
+ + " \ntotal=" + _cumulativeRead
+ + " \npadding=" + _cumulativePaddingStripped
+ + " \nprepared=" + _cumulativePrepared);
+ return -1;
+ } else {
+ if (i != len)
+ if (_log.shouldLog(Log.DEBUG))
+ _log.info("non-terminal eof: " + _eofFound + " i=" + i + " len=" + len);
+ }
+
+ return i;
+ }
+ dest[off+i] = (byte)val;
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Read the full buffer of size " + len);
+ return len;
+ }
+
+ public long skip(long numBytes) throws IOException {
+ for (long l = 0; l < numBytes; l++) {
+ int val = read();
+ if (val == -1) return l;
+ }
+ return numBytes;
+ }
+
+ public int available() throws IOException {
+ return _decryptedSize;
+ }
+
+ public void close() throws IOException {
+ in.close();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Cumulative bytes read from source/decrypted/stripped: " + _cumulativeRead + "/"
+ + _cumulativePrepared + "/" + _cumulativePaddingStripped + "] remaining [" + _decryptedSize + " ready, "
+ + _writesSinceDecrypt + " still encrypted]");
+ }
+
+ public void mark(int readLimit) { // nop
+ }
+
+ public void reset() throws IOException {
+ throw new IOException("Reset not supported");
+ }
+
+ public boolean markSupported() {
+ return false;
+ }
+
+ /**
+ * Read at least one new byte from the underlying stream, and up to max new bytes,
+ * but not necessarily enough for a new decrypted block. This blocks until at least
+ * one new byte is read from the stream
+ *
+ */
+ private void refill() throws IOException {
+ if ( (!_eofFound) && (_writesSinceDecrypt < BLOCK_SIZE) ) {
+ int read = in.read(_encryptedBuf, _writesSinceDecrypt, _encryptedBuf.length - _writesSinceDecrypt);
+ if (read == -1) {
+ _eofFound = true;
+ } else if (read > 0) {
+ _cumulativeRead += read;
+ _writesSinceDecrypt += read;
+ }
+ }
+ if (_writesSinceDecrypt == BLOCK_SIZE) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("We have " + _writesSinceDecrypt + " available to decrypt... doing so");
+ decryptBlock();
+ if ( (_writesSinceDecrypt > 0) && (_log.shouldLog(Log.DEBUG)) )
+ _log.debug("Bytes left in the encrypted buffer after decrypt: "
+ + _writesSinceDecrypt);
+ }
+ }
+
+ /**
+     * Decrypt the current 16-byte block in the encrypted buffer, strip its
+     * PKCS#5 padding, and queue the payload bytes for reading.
+ */
+ private void decryptBlock() throws IOException {
+ if (_writesSinceDecrypt != BLOCK_SIZE)
+ throw new IOException("Error decrypting - no data to decrypt");
+
+ if (_decryptedSize != 0)
+ throw new IOException("wtf, decrypted size is not 0? " + _decryptedSize);
+
+ _context.aes().decrypt(_encryptedBuf, 0, _encryptedBuf, 0, _key, _lastBlock, BLOCK_SIZE);
+ DataHelper.xor(_encryptedBuf, 0, _lastBlock, 0, _encryptedBuf, 0, BLOCK_SIZE);
+ int payloadBytes = countBlockPayload(_encryptedBuf, 0);
+
+ for (int i = 0; i < payloadBytes; i++) {
+ int c = _encryptedBuf[i];
+            if (c < 0)
+                c += 256;
+ _decryptedBuf[i] = c;
+ }
+ _decryptedSize = payloadBytes;
+
+ _cumulativePaddingStripped += BLOCK_SIZE - payloadBytes;
+ _cumulativePrepared += payloadBytes;
+
+ System.arraycopy(_encryptedBuf, 0, _lastBlock, 0, BLOCK_SIZE);
+
+ _writesSinceDecrypt = 0;
+ }
+
+ /**
+ * How many non-padded bytes are there in the block starting at the given
+ * location.
+ *
+ * PKCS#5 specifies the padding for the block has the # of padding bytes
+ * located in the last byte of the block, and each of the padding bytes are
+ * equal to that value.
+ * e.g. in a 4 byte block:
+ * 0x0a padded would become
+ * 0x0a 0x03 0x03 0x03
+ * e.g. in a 4 byte block:
+ * 0x01 0x02 padded would become
+ * 0x01 0x02 0x02 0x02
+ *
+ * We use 16 byte blocks in this AES implementation
+ *
+ * @throws IOException if the padding is invalid
+ */
+ private int countBlockPayload(byte data[], int startIndex) throws IOException {
+ int numPadBytes = data[startIndex + BLOCK_SIZE - 1];
+ if ((numPadBytes >= BLOCK_SIZE) || (numPadBytes <= 0)) {
+ if (_log.shouldLog(Log.DEBUG))
+                _log.debug("countBlockPayload on block index " + startIndex + ": "
+                           + numPadBytes + " is an invalid # of pad bytes");
+ throw new IOException("Invalid number of pad bytes (" + numPadBytes
+ + ") for " + startIndex + " index");
+ }
+
+ // optional, but a really good idea: verify the padding
+ if (true) {
+ for (int i = BLOCK_SIZE - numPadBytes; i < BLOCK_SIZE; i++) {
+ if (data[startIndex + i] != (byte) numPadBytes) {
+ throw new IOException("Incorrect padding on decryption: data[" + i
+ + "] = " + data[startIndex + i] + " not " + numPadBytes);
+ }
+ }
+ }
+
+ return BLOCK_SIZE - numPadBytes;
+ }
+
+ int remainingBytes() {
+ return _writesSinceDecrypt;
+ }
+
+ int readyBytes() {
+ return _decryptedSize;
+ }
+
+ /**
+ * Test AESOutputStream/AESInputStream
+ */
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+
+ try {
+ System.out.println("pwd=" + new java.io.File(".").getAbsolutePath());
+ System.out.println("Beginning");
+ runTest(ctx);
+ } catch (Throwable e) {
+ ctx.logManager().getLog(AESInputStream.class).error("Fail", e);
+ }
+ try { Thread.sleep(30*1000); } catch (InterruptedException ie) {}
+ System.out.println("Done");
+ }
+ private static void runTest(I2PAppContext ctx) {
+ Log log = ctx.logManager().getLog(AESInputStream.class);
+ log.setMinimumPriority(Log.DEBUG);
+ byte orig[] = new byte[1024 * 32];
+ RandomSource.getInstance().nextBytes(orig);
+ //byte orig[] = "you are my sunshine, my only sunshine".getBytes();
+ SessionKey key = KeyGenerator.getInstance().generateSessionKey();
+ byte iv[] = "there once was a".getBytes();
+
+ for (int i = 0; i < 20; i++) {
+ runTest(ctx, orig, key, iv);
+ }
+
+ log.info("Done testing 32KB data");
+
+ orig = new byte[20];
+ RandomSource.getInstance().nextBytes(orig);
+ for (int i = 0; i < 20; i++) {
+ runTest(ctx, orig, key, iv);
+ }
+
+ log.info("Done testing 20 byte data");
+
+ orig = new byte[3];
+ RandomSource.getInstance().nextBytes(orig);
+ for (int i = 0; i < 20; i++) {
+ runTest(ctx, orig, key, iv);
+ }
+
+ log.info("Done testing 3 byte data");
+
+ orig = new byte[0];
+ RandomSource.getInstance().nextBytes(orig);
+ for (int i = 0; i < 20; i++) {
+ runTest(ctx, orig, key, iv);
+ }
+
+ log.info("Done testing 0 byte data");
+
+ for (int i = 0; i <= 32768; i++) {
+ orig = new byte[i];
+ ctx.random().nextBytes(orig);
+ try {
+ log.info("Testing " + orig.length);
+ runTest(ctx, orig, key, iv);
+ } catch (RuntimeException re) {
+ log.error("Error testing " + orig.length);
+ throw re;
+ }
+ }
+
+/*
+ orig = new byte[615280];
+
+ RandomSource.getInstance().nextBytes(orig);
+ for (int i = 0; i < 20; i++) {
+ runTest(ctx, orig, key, iv);
+ }
+
+ log.info("Done testing 615280 byte data");
+*/
+ /*
+ for (int i = 0; i < 100; i++) {
+ orig = new byte[ctx.random().nextInt(1024*1024)];
+ ctx.random().nextBytes(orig);
+ try {
+ runTest(ctx, orig, key, iv);
+ } catch (RuntimeException re) {
+ log.error("Error testing " + orig.length);
+ throw re;
+ }
+ }
+
+ log.info("Done testing 100 random lengths");
+ */
+
+ orig = new byte[32];
+ RandomSource.getInstance().nextBytes(orig);
+ try {
+ runOffsetTest(ctx, orig, key, iv);
+ } catch (Exception e) {
+ log.info("Error running offset test", e);
+ }
+
+ log.info("Done testing offset test (it should have come back with a statement NOT EQUAL!)");
+
+ try {
+ Thread.sleep(30 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+
+ private static void runTest(I2PAppContext ctx, byte orig[], SessionKey key, byte[] iv) {
+ Log log = ctx.logManager().getLog(AESInputStream.class);
+ try {
+ long start = Clock.getInstance().now();
+ ByteArrayOutputStream origStream = new ByteArrayOutputStream(512);
+ AESOutputStream out = new AESOutputStream(ctx, origStream, key, iv);
+ out.write(orig);
+ out.close();
+
+ byte encrypted[] = origStream.toByteArray();
+ long endE = Clock.getInstance().now();
+
+ ByteArrayInputStream encryptedStream = new ByteArrayInputStream(encrypted);
+ AESInputStream sin = new AESInputStream(ctx, encryptedStream, key, iv);
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+ byte buf[] = new byte[1024 * 32];
+ int read = DataHelper.read(sin, buf);
+ if (read > 0) baos.write(buf, 0, read);
+ sin.close();
+ byte fin[] = baos.toByteArray();
+ long end = Clock.getInstance().now();
+ Hash origHash = SHA256Generator.getInstance().calculateHash(orig);
+
+ Hash newHash = SHA256Generator.getInstance().calculateHash(fin);
+ boolean eq = origHash.equals(newHash);
+ if (eq) {
+ //log.info("Equal hashes. hash: " + origHash);
+ } else {
+ throw new RuntimeException("NOT EQUAL! len=" + orig.length + " read=" + read
+ + "\norig: \t" + Base64.encode(orig) + "\nnew : \t"
+ + Base64.encode(fin));
+ }
+ boolean ok = DataHelper.eq(orig, fin);
+ log.debug("EQ data? " + ok + " origLen: " + orig.length + " fin.length: " + fin.length);
+ log.debug("Time to D(E(" + orig.length + ")): " + (end - start) + "ms");
+ log.debug("Time to E(" + orig.length + "): " + (endE - start) + "ms");
+ log.debug("Time to D(" + orig.length + "): " + (end - endE) + "ms");
+
+ } catch (IOException ioe) {
+ log.error("ERROR transferring", ioe);
+ }
+ //try { Thread.sleep(5000); } catch (Throwable t) {}
+ }
+
+ private static void runOffsetTest(I2PAppContext ctx, byte orig[], SessionKey key, byte[] iv) {
+ Log log = ctx.logManager().getLog(AESInputStream.class);
+ try {
+ long start = Clock.getInstance().now();
+ ByteArrayOutputStream origStream = new ByteArrayOutputStream(512);
+ AESOutputStream out = new AESOutputStream(ctx, origStream, key, iv);
+ out.write(orig);
+ out.close();
+
+ byte encrypted[] = origStream.toByteArray();
+ long endE = Clock.getInstance().now();
+
+ log.info("Encrypted segment length: " + encrypted.length);
+ byte encryptedSegment[] = new byte[40];
+ System.arraycopy(encrypted, 0, encryptedSegment, 0, 40);
+
+ ByteArrayInputStream encryptedStream = new ByteArrayInputStream(encryptedSegment);
+ AESInputStream sin = new AESInputStream(ctx, encryptedStream, key, iv);
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+ byte buf[] = new byte[1024 * 32];
+ int read = DataHelper.read(sin, buf);
+ int remaining = sin.remainingBytes();
+ int readyBytes = sin.readyBytes();
+ log.info("Read: " + read);
+ if (read > 0) baos.write(buf, 0, read);
+ sin.close();
+ byte fin[] = baos.toByteArray();
+ log.info("fin.length: " + fin.length + " remaining: " + remaining + " ready: " + readyBytes);
+ long end = Clock.getInstance().now();
+ Hash origHash = SHA256Generator.getInstance().calculateHash(orig);
+
+ Hash newHash = SHA256Generator.getInstance().calculateHash(fin);
+ boolean eq = origHash.equals(newHash);
+ if (eq)
+ log.info("Equal hashes. hash: " + origHash);
+ else
+ throw new RuntimeException("NOT EQUAL! len=" + orig.length + "\norig: \t" + Base64.encode(orig) + "\nnew : \t" + Base64.encode(fin));
+ boolean ok = DataHelper.eq(orig, fin);
+ log.debug("EQ data? " + ok + " origLen: " + orig.length + " fin.length: " + fin.length);
+ log.debug("Time to D(E(" + orig.length + ")): " + (end - start) + "ms");
+ log.debug("Time to E(" + orig.length + "): " + (endE - start) + "ms");
+ log.debug("Time to D(" + orig.length + "): " + (end - endE) + "ms");
+ } catch (RuntimeException re) {
+ throw re;
+ } catch (IOException ioe) {
+ log.error("ERROR transferring", ioe);
+ }
+ //try { Thread.sleep(5000); } catch (Throwable t) {}
+ }
+}
\ No newline at end of file
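
countBlockPayload() above applies the PKCS#5 rule described in its javadoc: the last byte of each 16-byte block gives the number of padding bytes, and every padding byte must repeat that value. A standalone sketch of the same check; the class name Pkcs5BlockDemo and the method payloadBytesIn are illustrative, and it returns -1 instead of throwing IOException:

    public class Pkcs5BlockDemo {
        static final int BLOCK_SIZE = 16;

        /** Payload bytes in one padded 16-byte block, or -1 if the padding is invalid. */
        static int payloadBytesIn(byte block[]) {
            int numPad = block[BLOCK_SIZE - 1];
            if (numPad <= 0 || numPad >= BLOCK_SIZE) return -1;     // 1..15 pad bytes allowed
            for (int i = BLOCK_SIZE - numPad; i < BLOCK_SIZE; i++)
                if (block[i] != (byte) numPad) return -1;           // each pad byte repeats the count
            return BLOCK_SIZE - numPad;
        }

        public static void main(String args[]) {
            byte block[] = new byte[BLOCK_SIZE];
            block[0] = 0x0a;                                        // one payload byte
            for (int i = 1; i < BLOCK_SIZE; i++) block[i] = 15;     // 15 pad bytes, each == 15
            System.out.println(payloadBytesIn(block));              // prints 1
        }
    }
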
diff --git a/src/net/i2p/crypto/AESOutputStream.java b/src/net/i2p/crypto/AESOutputStream.java
new file mode 100644
index 0000000..3deea7f
--- /dev/null
+++ b/src/net/i2p/crypto/AESOutputStream.java
@@ -0,0 +1,147 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Arrays;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.SessionKey;
+import net.i2p.util.Log;
+
+/**
+ * This writes everything as CBC with PKCS#5 padding, but each block is padded
+ * so as soon as a block is received it can be decrypted (rather than wait for
+ * an arbitrary number of blocks to arrive). That means that each block sent
+ * will contain exactly one padding byte (unless it was flushed with
+ * numBytes % (BLOCK_SIZE-1) != 0, in which case that last block will be padded
+ * with up to 15 bytes). So we have an expansion factor of 6.25%. c'est la vie
+ *
+ */
+public class AESOutputStream extends FilterOutputStream {
+ private Log _log;
+ private I2PAppContext _context;
+ private SessionKey _key;
+ private byte[] _lastBlock;
+ /**
+ * buffer containing the unwritten bytes. The first unwritten
+ * byte is _lastCommitted+1, and the last unwritten byte is _nextWrite-1
+ * (aka the next byte to be written on the array is _nextWrite)
+ */
+ private byte[] _unencryptedBuf;
+ private byte _writeBlock[];
+ /** how many bytes have we been given since we flushed it to the stream? */
+ private int _writesSinceCommit;
+ private long _cumulativeProvided; // how many bytes provided to this stream
+ private long _cumulativeWritten; // how many bytes written to the underlying stream
+ private long _cumulativePadding; // how many bytes of padding written
+
+ public final static float EXPANSION_FACTOR = 1.0625f; // 6% overhead w/ the padding
+
+ private final static int BLOCK_SIZE = CryptixRijndael_Algorithm._BLOCK_SIZE;
+ private final static int MAX_BUF = 256;
+
+ public AESOutputStream(I2PAppContext context, OutputStream source, SessionKey key, byte[] iv) {
+ super(source);
+ _context = context;
+ _log = context.logManager().getLog(AESOutputStream.class);
+ _key = key;
+ _lastBlock = new byte[BLOCK_SIZE];
+ System.arraycopy(iv, 0, _lastBlock, 0, BLOCK_SIZE);
+ _unencryptedBuf = new byte[MAX_BUF];
+ _writeBlock = new byte[BLOCK_SIZE];
+ _writesSinceCommit = 0;
+ }
+
+ public void write(int val) throws IOException {
+ _cumulativeProvided++;
+ _unencryptedBuf[_writesSinceCommit++] = (byte)(val & 0xFF);
+ if (_writesSinceCommit == _unencryptedBuf.length)
+ doFlush();
+ }
+
+ public void write(byte src[]) throws IOException {
+ write(src, 0, src.length);
+ }
+
+ public void write(byte src[], int off, int len) throws IOException {
+ // i'm too lazy to unroll this into the partial writes (dealing with
+ // wrapping around the buffer size)
+ for (int i = 0; i < len; i++)
+ write(src[i+off]);
+ }
+
+ public void close() throws IOException {
+ flush();
+ out.close();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Cumulative bytes provided to this stream / written out / padded: "
+ + _cumulativeProvided + "/" + _cumulativeWritten + "/" + _cumulativePadding);
+ }
+
+ public void flush() throws IOException {
+ doFlush();
+ out.flush();
+ }
+
+ private void doFlush() throws IOException {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("doFlush(): writesSinceCommit=" + _writesSinceCommit);
+ writeEncrypted();
+ _writesSinceCommit = 0;
+ }
+
+ /**
+ * Encrypt an arbitrary size array with AES using CBC and PKCS#5 padding,
+ * write it to the stream, and set _lastBlock to the last encrypted
+ * block. This operation works by taking every (BLOCK_SIZE-1) bytes
+ * from the src, padding it with PKCS#5 (aka adding 0x01), and encrypting
+ * it. If the last block doesn't contain exactly (BLOCK_SIZE-1) bytes, it
+ * is padded with PKCS#5 as well (adding # padding bytes repeated that many
+ * times).
+ *
+ */
+ private void writeEncrypted() throws IOException {
+ int numBlocks = _writesSinceCommit / (BLOCK_SIZE - 1);
+
+ if (_log.shouldLog(Log.INFO))
+ _log.info("writeE(): #=" + _writesSinceCommit + " blocks=" + numBlocks);
+
+ for (int i = 0; i < numBlocks; i++) {
+ DataHelper.xor(_unencryptedBuf, i * 15, _lastBlock, 0, _writeBlock, 0, 15);
+ // the padding byte for "full" blocks
+ _writeBlock[BLOCK_SIZE - 1] = (byte)(_lastBlock[BLOCK_SIZE - 1] ^ 0x01);
+ _context.aes().encrypt(_writeBlock, 0, _writeBlock, 0, _key, _lastBlock, BLOCK_SIZE);
+ out.write(_writeBlock);
+ System.arraycopy(_writeBlock, 0, _lastBlock, 0, BLOCK_SIZE);
+ _cumulativeWritten += BLOCK_SIZE;
+ _cumulativePadding++;
+ }
+
+ if (_writesSinceCommit % 15 != 0) {
+ // we need to do non trivial padding
+ int remainingBytes = _writesSinceCommit - numBlocks * 15;
+ int paddingBytes = BLOCK_SIZE - remainingBytes;
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Padding " + _writesSinceCommit + " with " + paddingBytes + " bytes in " + (numBlocks+1) + " blocks");
+ System.arraycopy(_unencryptedBuf, numBlocks * 15, _writeBlock, 0, remainingBytes);
+ Arrays.fill(_writeBlock, remainingBytes, BLOCK_SIZE, (byte) paddingBytes);
+ DataHelper.xor(_writeBlock, 0, _lastBlock, 0, _writeBlock, 0, BLOCK_SIZE);
+ _context.aes().encrypt(_writeBlock, 0, _writeBlock, 0, _key, _lastBlock, BLOCK_SIZE);
+ out.write(_writeBlock);
+ System.arraycopy(_writeBlock, 0, _lastBlock, 0, BLOCK_SIZE);
+ _cumulativePadding += paddingBytes;
+ _cumulativeWritten += BLOCK_SIZE;
+ }
+ }
+}
\ No newline at end of file
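
Taken together, AESOutputStream and AESInputStream form a symmetric stream pair: whatever is written through the former can be read back through the latter with the same key and IV, which is exactly what runTest() in AESInputStream exercises. A compact round-trip sketch, assuming an I2PAppContext and the classes above are available from the surrounding library; the class name AESStreamRoundTrip and the sample plaintext are illustrative:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import net.i2p.I2PAppContext;
    import net.i2p.crypto.AESInputStream;
    import net.i2p.crypto.AESOutputStream;
    import net.i2p.data.DataHelper;
    import net.i2p.data.SessionKey;

    public class AESStreamRoundTrip {
        public static void main(String args[]) throws Exception {
            I2PAppContext ctx = new I2PAppContext();
            SessionKey key = ctx.keyGenerator().generateSessionKey();
            byte iv[] = new byte[16];
            ctx.random().nextBytes(iv);
            byte orig[] = "attack at dawn".getBytes("UTF-8");

            // encrypt: each 15 plaintext bytes become one padded 16-byte CBC block
            ByteArrayOutputStream enc = new ByteArrayOutputStream();
            AESOutputStream out = new AESOutputStream(ctx, enc, key, iv);
            out.write(orig);
            out.close();

            // decrypt: the input stream strips the per-block padding transparently
            AESInputStream in = new AESInputStream(ctx, new ByteArrayInputStream(enc.toByteArray()), key, iv);
            byte back[] = new byte[orig.length];
            int read = DataHelper.read(in, back);
            in.close();
            System.out.println("match=" + DataHelper.eq(orig, back) + " read=" + read);
        }
    }
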
diff --git a/src/net/i2p/crypto/CryptixAESEngine.java b/src/net/i2p/crypto/CryptixAESEngine.java
new file mode 100644
index 0000000..626869c
--- /dev/null
+++ b/src/net/i2p/crypto/CryptixAESEngine.java
@@ -0,0 +1,275 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.security.InvalidKeyException;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.ByteArray;
+import net.i2p.data.DataHelper;
+import net.i2p.data.SessionKey;
+import net.i2p.util.ByteCache;
+import net.i2p.util.Log;
+
+/**
+ * Wrapper for AES cypher operation using Cryptix's Rijndael implementation. Implements
+ * CBC with a 16 byte IV.
+ * Problems:
+ * Only supports data of size mod 16 bytes - no inherent padding.
+ *
+ * @author jrandom, thecrypto
+ */
+public class CryptixAESEngine extends AESEngine {
+ private Log _log;
+ private final static CryptixRijndael_Algorithm _algo = new CryptixRijndael_Algorithm();
+ private final static boolean USE_FAKE_CRYPTO = false;
+ private final static byte FAKE_KEY = 0x2A;
+ private CryptixAESKeyCache _cache;
+
+ private static final ByteCache _prevCache = ByteCache.getInstance(16, 16);
+
+ public CryptixAESEngine(I2PAppContext context) {
+ super(context);
+ _log = context.logManager().getLog(CryptixAESEngine.class);
+ _cache = new CryptixAESKeyCache();
+ }
+
+ public void encrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int length) {
+ encrypt(payload, payloadIndex, out, outIndex, sessionKey, iv, 0, length);
+ }
+
+ public void encrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int ivOffset, int length) {
+ if ( (payload == null) || (out == null) || (sessionKey == null) || (iv == null) )
+ throw new NullPointerException("invalid args to aes");
+ if (payload.length < payloadIndex + length)
+ throw new IllegalArgumentException("Payload is too short");
+ if (out.length < outIndex + length)
+ throw new IllegalArgumentException("Output is too short");
+ if (length <= 0)
+ throw new IllegalArgumentException("Length is too small");
+ if (length % 16 != 0)
+ throw new IllegalArgumentException("Only lengths mod 16 are supported here");
+
+ if (USE_FAKE_CRYPTO) {
+ _log.warn("AES Crypto disabled! Using trivial XOR");
+ System.arraycopy(payload, payloadIndex, out, outIndex, length);
+ return;
+ }
+
+ int numblock = length / 16;
+
+ DataHelper.xor(iv, ivOffset, payload, payloadIndex, out, outIndex, 16);
+ encryptBlock(out, outIndex, sessionKey, out, outIndex);
+ for (int x = 1; x < numblock; x++) {
+ DataHelper.xor(out, outIndex + (x-1) * 16, payload, payloadIndex + x * 16, out, outIndex + x * 16, 16);
+ encryptBlock(out, outIndex + x * 16, sessionKey, out, outIndex + x * 16);
+ }
+ }
+
+ public void decrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int length) {
+ decrypt(payload, payloadIndex, out, outIndex, sessionKey, iv, 0, length);
+ }
+ public void decrypt(byte payload[], int payloadIndex, byte out[], int outIndex, SessionKey sessionKey, byte iv[], int ivOffset, int length) {
+ if ((iv== null) || (payload == null) || (payload.length <= 0) || (sessionKey == null) )
+ throw new IllegalArgumentException("bad setup");
+ else if (out == null)
+ throw new IllegalArgumentException("out is null");
+ else if (out.length - outIndex < length)
+ throw new IllegalArgumentException("out is too small (out.length=" + out.length
+ + " outIndex=" + outIndex + " length=" + length);
+
+ if (USE_FAKE_CRYPTO) {
+ _log.warn("AES Crypto disabled! Using trivial XOR");
+ System.arraycopy(payload, payloadIndex, out, outIndex, length);
+ return ;
+ }
+
+ int numblock = length / 16;
+ if (length % 16 != 0) numblock++;
+
+ ByteArray prevA = _prevCache.acquire();
+ byte prev[] = prevA.getData();
+ ByteArray curA = _prevCache.acquire();
+ byte cur[] = curA.getData();
+ System.arraycopy(iv, ivOffset, prev, 0, 16);
+
+ for (int x = 0; x < numblock; x++) {
+ System.arraycopy(payload, payloadIndex + (x * 16), cur, 0, 16);
+ decryptBlock(payload, payloadIndex + (x * 16), sessionKey, out, outIndex + (x * 16));
+ DataHelper.xor(out, outIndex + x * 16, prev, 0, out, outIndex + x * 16, 16);
+ iv = prev; // just use IV to switch 'em around
+ prev = cur;
+ cur = iv;
+ }
+
+ /*
+ decryptBlock(payload, payloadIndex, sessionKey, out, outIndex);
+ DataHelper.xor(out, outIndex, iv, 0, out, outIndex, 16);
+ for (int x = 1; x < numblock; x++) {
+ decryptBlock(payload, payloadIndex + (x * 16), sessionKey, out, outIndex + (x * 16));
+ DataHelper.xor(out, outIndex + x * 16, payload, payloadIndex + (x - 1) * 16, out, outIndex + x * 16, 16);
+ }
+ */
+
+ _prevCache.release(prevA);
+ _prevCache.release(curA);
+ }
+
+ public final void encryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte out[], int outIndex) {
+ if (sessionKey.getPreparedKey() == null) {
+ try {
+ Object key = CryptixRijndael_Algorithm.makeKey(sessionKey.getData(), 16);
+ sessionKey.setPreparedKey(key);
+ } catch (InvalidKeyException ike) {
+ _log.log(Log.CRIT, "Invalid key", ike);
+ throw new IllegalArgumentException("wtf, invalid key? " + ike.getMessage());
+ }
+ }
+
+ CryptixRijndael_Algorithm.blockEncrypt(payload, out, inIndex, outIndex, sessionKey.getPreparedKey(), 16);
+ }
+
+ /** decrypt the data with the session key provided
+ * @param payload encrypted data
+ * @param sessionKey private session key
+ */
+ public final void decryptBlock(byte payload[], int inIndex, SessionKey sessionKey, byte rv[], int outIndex) {
+ if ( (payload == null) || (rv == null) )
+ throw new IllegalArgumentException("null block args [payload=" + payload + " rv="+rv);
+ if (payload.length - inIndex > rv.length - outIndex)
+ throw new IllegalArgumentException("bad block args [payload.len=" + payload.length
+ + " inIndex=" + inIndex + " rv.len=" + rv.length
+ + " outIndex="+outIndex);
+ if (sessionKey.getPreparedKey() == null) {
+ try {
+ Object key = CryptixRijndael_Algorithm.makeKey(sessionKey.getData(), 16);
+ sessionKey.setPreparedKey(key);
+ } catch (InvalidKeyException ike) {
+ _log.log(Log.CRIT, "Invalid key", ike);
+ throw new IllegalArgumentException("wtf, invalid key? " + ike.getMessage());
+ }
+ }
+
+ CryptixRijndael_Algorithm.blockDecrypt(payload, rv, inIndex, outIndex, sessionKey.getPreparedKey(), 16);
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+ try {
+ testEDBlock(ctx);
+ testEDBlock2(ctx);
+ testED(ctx);
+ testED2(ctx);
+ //testFake(ctx);
+ //testNull(ctx);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ try { Thread.sleep(5*1000); } catch (InterruptedException ie) {}
+ }
+ private static void testED(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[128];
+ byte encrypted[] = new byte[128];
+ byte decrypted[] = new byte[128];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encrypt(orig, 0, encrypted, 0, key, iv, orig.length);
+ aes.decrypt(encrypted, 0, decrypted, 0, key, iv, encrypted.length);
+ if (!DataHelper.eq(decrypted,orig))
+ throw new RuntimeException("full D(E(orig)) != orig");
+ else
+ System.out.println("full D(E(orig)) == orig");
+ }
+ private static void testED2(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[128];
+ byte data[] = new byte[128];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encrypt(orig, 0, data, 0, key, iv, data.length);
+ aes.decrypt(data, 0, data, 0, key, iv, data.length);
+ if (!DataHelper.eq(data,orig))
+ throw new RuntimeException("full D(E(orig)) != orig");
+ else
+ System.out.println("full D(E(orig)) == orig");
+ }
+ private static void testFake(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ SessionKey wrongKey = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[128];
+ byte encrypted[] = new byte[128];
+ byte decrypted[] = new byte[128];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encrypt(orig, 0, encrypted, 0, key, iv, orig.length);
+ aes.decrypt(encrypted, 0, decrypted, 0, wrongKey, iv, encrypted.length);
+ if (DataHelper.eq(decrypted,orig))
+ throw new RuntimeException("full D(E(orig)) == orig when we used the wrong key!");
+ else
+ System.out.println("full D(E(orig)) != orig when we used the wrong key");
+ }
+ private static void testNull(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ SessionKey wrongKey = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[128];
+ byte encrypted[] = new byte[128];
+ byte decrypted[] = new byte[128];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encrypt(orig, 0, encrypted, 0, key, iv, orig.length);
+ try {
+ aes.decrypt(null, 0, null, 0, wrongKey, iv, encrypted.length);
+ } catch (IllegalArgumentException iae) {
+ return;
+ }
+
+ throw new RuntimeException("full D(E(orig)) didn't fail when we used null!");
+ }
+ private static void testEDBlock(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[16];
+ byte encrypted[] = new byte[16];
+ byte decrypted[] = new byte[16];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encryptBlock(orig, 0, key, encrypted, 0);
+ aes.decryptBlock(encrypted, 0, key, decrypted, 0);
+ if (!DataHelper.eq(decrypted,orig))
+ throw new RuntimeException("block D(E(orig)) != orig");
+ else
+ System.out.println("block D(E(orig)) == orig");
+ }
+ private static void testEDBlock2(I2PAppContext ctx) {
+ SessionKey key = ctx.keyGenerator().generateSessionKey();
+ byte iv[] = new byte[16];
+ byte orig[] = new byte[16];
+ byte data[] = new byte[16];
+ ctx.random().nextBytes(iv);
+ ctx.random().nextBytes(orig);
+ CryptixAESEngine aes = new CryptixAESEngine(ctx);
+ aes.encryptBlock(orig, 0, key, data, 0);
+ aes.decryptBlock(data, 0, key, data, 0);
+ if (!DataHelper.eq(data,orig))
+ throw new RuntimeException("block D(E(orig)) != orig");
+ else
+ System.out.println("block D(E(orig)) == orig");
+ }
+}
\ No newline at end of file
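
CryptixAESEngine implements raw AES-256 in CBC mode with a 16-byte IV and no padding, so the input must already be a multiple of 16 bytes. For comparison only (not part of this patch), roughly the same operation expressed with the standard javax.crypto API looks like the sketch below; older JREs may need the unlimited-strength policy files for 256-bit keys:

    import java.security.SecureRandom;
    import java.util.Arrays;
    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class JceCbcComparison {
        public static void main(String args[]) throws Exception {
            SecureRandom rnd = new SecureRandom();
            byte key[] = new byte[32];             // 256-bit key, like a SessionKey
            byte iv[] = new byte[16];
            byte plain[] = new byte[64];           // must be a multiple of 16 bytes
            rnd.nextBytes(key);
            rnd.nextBytes(iv);
            rnd.nextBytes(plain);

            Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            byte enc[] = c.doFinal(plain);

            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            byte dec[] = c.doFinal(enc);
            System.out.println("match=" + Arrays.equals(plain, dec));
        }
    }
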
diff --git a/src/net/i2p/crypto/CryptixAESKeyCache.java b/src/net/i2p/crypto/CryptixAESKeyCache.java
new file mode 100644
index 0000000..43513c1
--- /dev/null
+++ b/src/net/i2p/crypto/CryptixAESKeyCache.java
@@ -0,0 +1,70 @@
+package net.i2p.crypto;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Cache the objects used in CryptixRijndael_Algorithm.makeKey to reduce
+ * memory churn. The KeyCacheEntry should be held onto as long as the
+ * data referenced in it is needed (which often is only one or two lines
+ * of code)
+ *
+ */
+public final class CryptixAESKeyCache {
+ private List _availableKeys;
+
+ private static final int KEYSIZE = 32; // 256bit AES
+ private static final int BLOCKSIZE = 16;
+ private static final int ROUNDS = CryptixRijndael_Algorithm.getRounds(KEYSIZE, BLOCKSIZE);
+ private static final int BC = BLOCKSIZE / 4;
+ private static final int KC = KEYSIZE / 4;
+
+ private static final int MAX_KEYS = 64;
+
+ public CryptixAESKeyCache() {
+ _availableKeys = new ArrayList(MAX_KEYS);
+ }
+
+ /**
+ * Get the next available structure, either from the cache or a brand new one
+ *
+ */
+ public final KeyCacheEntry acquireKey() {
+ synchronized (_availableKeys) {
+ if (_availableKeys.size() > 0)
+ return (KeyCacheEntry)_availableKeys.remove(0);
+ }
+ return createNew();
+ }
+
+ /**
+ * Put this structure back onto the available cache for reuse
+ *
+ */
+ public final void releaseKey(KeyCacheEntry key) {
+ synchronized (_availableKeys) {
+ if (_availableKeys.size() < MAX_KEYS)
+ _availableKeys.add(key);
+ }
+ }
+
+ public static final KeyCacheEntry createNew() {
+ KeyCacheEntry e = new KeyCacheEntry();
+ e.Ke = new int[ROUNDS + 1][BC]; // encryption round keys
+ e.Kd = new int[ROUNDS + 1][BC]; // decryption round keys
+ e.tk = new int[KC];
+ e.key = new Object[] { e.Ke, e.Kd };
+ return e;
+ }
+
+ /**
+ * all the data alloc'ed in a makeKey call
+ */
+ public static final class KeyCacheEntry {
+ int[][] Ke;
+ int[][] Kd;
+ int[] tk;
+
+ Object[] key;
+ }
+}
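
CryptixAESKeyCache simply recycles the int[][] buffers a Rijndael key schedule needs: callers acquire an entry, let the key schedule populate it, and release it once the prepared key is no longer in use. A sketch of that acquire/use/release pattern; the class name KeyCacheDemo is illustrative, and the makeKey variant that actually accepts a cache entry is not shown in this patch, so that step is left as a comment:

    import net.i2p.crypto.CryptixAESKeyCache;
    import net.i2p.crypto.CryptixAESKeyCache.KeyCacheEntry;

    public class KeyCacheDemo {
        private static final CryptixAESKeyCache CACHE = new CryptixAESKeyCache();

        static void useKey(byte rawKey[]) {
            KeyCacheEntry entry = CACHE.acquireKey();   // reuse pooled round-key buffers if available
            try {
                // run the key schedule for rawKey into the entry here (not shown in this patch),
                // then encrypt/decrypt blocks for as long as the prepared key is needed
            } finally {
                CACHE.releaseKey(entry);                // return the buffers to the pool for reuse
            }
        }

        public static void main(String args[]) {
            useKey(new byte[32]);                       // 256-bit key material (all zeros, demo only)
        }
    }
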
diff --git a/src/net/i2p/crypto/CryptixRijndael_Algorithm.java b/src/net/i2p/crypto/CryptixRijndael_Algorithm.java
new file mode 100644
index 0000000..58444d0
--- /dev/null
+++ b/src/net/i2p/crypto/CryptixRijndael_Algorithm.java
@@ -0,0 +1,902 @@
+/*
+ * Copyright (c) 1997, 1998 Systemics Ltd on behalf of
+ * the Cryptix Development Team. All rights reserved.
+ */
+package net.i2p.crypto;
+
+import java.io.PrintWriter;
+import java.security.InvalidKeyException;
+
+import net.i2p.util.Clock;
+
+//...........................................................................
+/**
+ * Rijndael --pronounced Reindaal-- is a variable block-size (128-, 192- and
+ * 256-bit), variable key-size (128-, 192- and 256-bit) symmetric cipher.
+ *
+ */
+public final class SHA1 extends MessageDigest implements Cloneable {
+
+ /**
+ * This implementation returns a fixed-size digest.
+ */
+ private static final int HASH_LENGTH = 20; // bytes == 160 bits
+
+ /**
+ * Private context for incomplete blocks and padding bytes.
+ * INVARIANT: padding must be in 0..63.
+ * When the padding reaches 64, a new block is computed, and
+ * the 56 last bytes are kept in the padding history.
+ */
+ private byte[] pad;
+ private int padding;
+
+ /**
+ * Private contextual byte count, sent in the next block,
+ * after the ending padding block.
+ */
+ private long bytes;
+
+ /**
+ * Private context that contains the current digest key.
+ */
+ private int hA, hB, hC, hD, hE;
+
+ /**
+ * Creates a SHA1 object with default initial state.
+ */
+ public SHA1() {
+ super("SHA-1");
+ pad = new byte[64];
+ init();
+ }
+
+ /**
+ * Clones this object.
+ */
+ public Object clone() throws CloneNotSupportedException {
+ SHA1 that = (SHA1)super.clone();
+ that.pad = (byte[])this.pad.clone();
+ return that;
+ }
+
+ /**
+ * Returns the digest length in bytes.
+ *
+ * Can be used to allocate your own output buffer when
+ * computing multiple digests.
+ *
+     * Overrides the protected abstract method of
+     * java.security.MessageDigestSpi.
+     *
+     * See also:
+     * http://csrc.ncsl.nist.gov/CryptoToolkit/Hash.html
+     * http://www.itl.nist.gov/div897/pubs/fip180-1.htm
+     * John Wiley & Sons, 1996
+     *
+ * @return the digest length in bytes.
+ */
+ public int engineGetDigestLength() {
+ return HASH_LENGTH;
+ }
+
+ /**
+     * Reset then initialize the digest context.
+ *
+ * Overrides the protected abstract method of
+     * java.security.MessageDigestSpi.
+ */
+ protected void engineReset() {
+ int i = 60;
+ do {
+ pad[i ] = (byte)0x00;
+ pad[i + 1] = (byte)0x00;
+ pad[i + 2] = (byte)0x00;
+ pad[i + 3] = (byte)0x00;
+ } while ((i -= 4) >= 0);
+ padding = 0;
+ bytes = 0;
+ init();
+ }
+
+ /**
+ * Initialize the digest context.
+ */
+ protected void init() {
+ hA = 0x67452301;
+ hB = 0xefcdab89;
+ hC = 0x98badcfe;
+ hD = 0x10325476;
+ hE = 0xc3d2e1f0;
+ }
+
+ /**
+ * Updates the digest using the specified byte.
+ * Requires internal buffering, and may be slow.
+ *
+ * Overrides the protected abstract method of
+ * java.security.MessageDigestSpi.
+ * @param input the byte to use for the update.
+ */
+ public void engineUpdate(byte input) {
+ bytes++;
+ if (padding < 63) {
+ pad[padding++] = input;
+ return;
+ }
+ pad[63] = input;
+ computeBlock(pad, 0);
+ padding = 0;
+ }
+
+ /**
+ * Updates the digest using the specified array of bytes,
+ * starting at the specified offset.
+ *
+ * Input length can be any size. May require internal buffering,
+ * if input blocks are not multiple of 64 bytes.
+ *
+ * Overrides the protected abstract method of
+ * java.security.MessageDigestSpi.
+ * @param input the array of bytes to use for the update.
+ * @param offset the offset to start from in the array of bytes.
+ * @param len the number of bytes to use, starting at offset.
+ */
+ public void engineUpdate(byte[] input, int offset, int len) {
+ if (offset >= 0 && len >= 0 && offset + len <= input.length) {
+ bytes += len;
+ /* Terminate the previous block. */
+ int padlen = 64 - padding;
+ if (padding > 0 && len >= padlen) {
+ System.arraycopy(input, offset, pad, padding, padlen);
+ computeBlock(pad, 0);
+ padding = 0;
+ offset += padlen;
+ len -= padlen;
+ }
+ /* Loop on large sets of complete blocks. */
+ while (len >= 512) {
+ computeBlock(input, offset);
+ computeBlock(input, offset + 64);
+ computeBlock(input, offset + 128);
+ computeBlock(input, offset + 192);
+ computeBlock(input, offset + 256);
+ computeBlock(input, offset + 320);
+ computeBlock(input, offset + 384);
+ computeBlock(input, offset + 448);
+ offset += 512;
+ len -= 512;
+ }
+ /* Loop on remaining complete blocks. */
+ while (len >= 64) {
+ computeBlock(input, offset);
+ offset += 64;
+ len -= 64;
+ }
+ /* remaining bytes kept for next block. */
+ if (len > 0) {
+ System.arraycopy(input, offset, pad, padding, len);
+ padding += len;
+ }
+ return;
+ }
+ throw new ArrayIndexOutOfBoundsException(offset);
+ }
+
+ /**
+ * Completes the hash computation by performing final operations
+ * such as padding. Computes the final hash and returns the final
+ * value as a byte[20] array. Once engineDigest has been called,
+ * the engine will be automatically reset as specified in the
+ * JavaSecurity MessageDigest specification.
+ *
+ * For faster operations with multiple digests, allocate your own
+ * array and use engineDigest(byte[], int offset, int len).
+ *
+ * Overrides the protected abstract method of
+ * java.security.MessageDigestSpi.
+     * @return the computed digest as a byte[20] array, or null if a DigestException occurs.
+ */
+ public byte[] engineDigest() {
+ try {
+ final byte hashvalue[] = new byte[HASH_LENGTH];
+ engineDigest(hashvalue, 0, HASH_LENGTH);
+ return hashvalue;
+ } catch (DigestException e) {
+ return null;
+ }
+ }
+
+ /**
+ * Completes the hash computation by performing final operations
+ * such as padding. Once engineDigest has been called, the engine
+ * will be automatically reset (see engineReset).
+ *
+ * Overrides the protected abstract method of
+ * java.security.MessageDigestSpi.
+ * @param hashvalue the output buffer in which to store the digest.
+ * @param offset offset to start from in the output buffer
+ * @param len number of bytes within buf allotted for the digest.
+ * Both this default implementation and the SUN provider
+ * do not return partial digests. The presence of this
+ * parameter is solely for consistency in our API's.
+ * If the value of this parameter is less than the
+ * actual digest length, the method will throw a
+ * DigestException. This parameter is ignored if its
+ * value is greater than or equal to the actual digest
+ * length.
+ * @return the length of the digest stored in the output buffer.
+ */
+ public int engineDigest(byte[] hashvalue, int offset, final int len)
+ throws DigestException {
+ if (len >= HASH_LENGTH) {
+ if (hashvalue.length - offset >= HASH_LENGTH) {
+ /* Flush the trailing bytes, adding padding bytes into last
+ * blocks. */
+ int i;
+ /* Add padding null bytes but replace the last 8 padding bytes
+                 * by the big-endian 64-bit digested message bit-length. */
+ pad[i = padding] = (byte)0x80; /* required 1st padding byte */
+ /* Check if 8 bytes available in pad to store the total
+ * message size */
+ switch (i) { /* INVARIANT: i must be in [0..63] */
+ case 52: pad[53] = (byte)0x00; /* no break; falls thru */
+ case 53: pad[54] = (byte)0x00; /* no break; falls thru */
+ case 54: pad[55] = (byte)0x00; /* no break; falls thru */
+ case 55: break;
+ case 56: pad[57] = (byte)0x00; /* no break; falls thru */
+ case 57: pad[58] = (byte)0x00; /* no break; falls thru */
+ case 58: pad[59] = (byte)0x00; /* no break; falls thru */
+ case 59: pad[60] = (byte)0x00; /* no break; falls thru */
+ case 60: pad[61] = (byte)0x00; /* no break; falls thru */
+ case 61: pad[62] = (byte)0x00; /* no break; falls thru */
+ case 62: pad[63] = (byte)0x00; /* no break; falls thru */
+ case 63:
+ computeBlock(pad, 0);
+ /* Clear the 56 first bytes of pad[]. */
+ i = 52;
+ do {
+ pad[i ] = (byte)0x00;
+ pad[i + 1] = (byte)0x00;
+ pad[i + 2] = (byte)0x00;
+ pad[i + 3] = (byte)0x00;
+ } while ((i -= 4) >= 0);
+ break;
+ default:
+ /* Clear the rest of 56 first bytes of pad[]. */
+ switch (i & 3) {
+ case 3: i++;
+ break;
+ case 2: pad[(i += 2) - 1] = (byte)0x00;
+ break;
+ case 1: pad[(i += 3) - 2] = (byte)0x00;
+ pad[ i - 1] = (byte)0x00;
+ break;
+ case 0: pad[(i += 4) - 3] = (byte)0x00;
+ pad[ i - 2] = (byte)0x00;
+ pad[ i - 1] = (byte)0x00;
+ }
+ do {
+ pad[i ] = (byte)0x00;
+ pad[i + 1] = (byte)0x00;
+ pad[i + 2] = (byte)0x00;
+ pad[i + 3] = (byte)0x00;
+ } while ((i += 4) < 56);
+ }
+ /* Convert the message size from bytes to big-endian bits. */
+ pad[56] = (byte)((i = (int)(bytes >>> 29)) >> 24);
+ pad[57] = (byte)(i >>> 16);
+ pad[58] = (byte)(i >>> 8);
+ pad[59] = (byte)i;
+ pad[60] = (byte)((i = (int)bytes << 3) >> 24);
+ pad[61] = (byte)(i >>> 16);
+ pad[62] = (byte)(i >>> 8);
+ pad[63] = (byte)i;
+ computeBlock(pad, 0);
+ /* Return the computed digest in big-endian byte order. */
+ hashvalue[offset ] = (byte)((i = hA) >>> 24);
+ hashvalue[offset + 1] = (byte)(i >>> 16);
+ hashvalue[offset + 2] = (byte)(i >>> 8);
+ hashvalue[offset + 3] = (byte)i;
+ hashvalue[offset + 4] = (byte)((i = hB) >>> 24);
+ hashvalue[offset += 5] = (byte)(i >>> 16);
+ hashvalue[offset + 1] = (byte)(i >>> 8);
+ hashvalue[offset + 2] = (byte)i;
+ hashvalue[offset + 3] = (byte)((i = hC) >>> 24);
+ hashvalue[offset + 4] = (byte)(i >>> 16);
+ hashvalue[offset += 5] = (byte)(i >>> 8);
+ hashvalue[offset + 1] = (byte)i;
+ hashvalue[offset + 2] = (byte)((i = hD) >>> 24);
+ hashvalue[offset + 3] = (byte)(i >>> 16);
+ hashvalue[offset + 4] = (byte)(i >>> 8);
+ hashvalue[offset += 5] = (byte)i;
+ hashvalue[offset + 1] = (byte)((i = hE) >>> 24);
+ hashvalue[offset + 2] = (byte)(i >>> 16);
+ hashvalue[offset + 3] = (byte)(i >>> 8);
+ hashvalue[offset + 4] = (byte)i;
+ engineReset(); /* clear the evidence */
+ return HASH_LENGTH;
+ }
+ throw new DigestException(
+ "insufficient space in output buffer to store the digest");
+ }
+ throw new DigestException("partial digests not returned");
+ }
+
+ /**
+ * Updates the digest using the specified array of bytes,
+ * starting at the specified offset, but an implied length
+ * of exactly 64 bytes.
+ *
+ * Requires no internal buffering, but assumes a fixed input size,
+ * in which the required padding bytes may have been added.
+ *
+ * @param input the array of bytes to use for the update.
+ * @param offset the offset to start from in the array of bytes.
+ */
+ private void computeBlock(final byte[] input, int offset) {
+ /* Local temporary work variables for intermediate digests. */
+ int a, b, c, d, e;
+ /* Cache the input block into the local working set of 32-bit
+ * values, in big-endian byte order. Be careful when
+ * widening bytes or integers due to sign extension! */
+ int i00, i01, i02, i03, i04, i05, i06, i07,
+ i08, i09, i10, i11, i12, i13, i14, i15;
+ /* Use hash schedule function Ch (rounds 0..19):
+ * Ch(x,y,z) = (x & y) ^ (~x & z) = (x & (y ^ z)) ^ z,
+ * and K00 = .... = K19 = 0x5a827999. */
+ /* First pass, on big endian input (rounds 0..15). */
+ e = hE
+ + (((a = hA) << 5) | (a >>> 27)) + 0x5a827999 // K00
+ + (((b = hB) & ((c = hC) ^ (d = hD))) ^ d) // Ch(b,c,d)
+ + (i00 = input[offset ] << 24
+ | (input[offset + 1] & 0xff) << 16
+ | (input[offset + 2] & 0xff) << 8
+ | (input[offset + 3] & 0xff)); // W00
+ d += ((e << 5) | (e >>> 27)) + 0x5a827999 // K01
+ + ((a & ((b = (b << 30) | (b >>> 2)) ^ c)) ^ c) // Ch(a,b,c)
+ + (i01 = input[offset + 4] << 24
+ | (input[offset += 5] & 0xff) << 16
+ | (input[offset + 1] & 0xff) << 8
+ | (input[offset + 2] & 0xff)); // W01
+ c += ((d << 5) | (d >>> 27)) + 0x5a827999 // K02
+ + ((e & ((a = (a << 30) | (a >>> 2)) ^ b)) ^ b) // Ch(e,a,b)
+ + (i02 = input[offset + 3] << 24
+ | (input[offset + 4] & 0xff) << 16
+ | (input[offset += 5] & 0xff) << 8
+ | (input[offset + 1] & 0xff)); // W02
+ b += ((c << 5) | (c >>> 27)) + 0x5a827999 // K03
+ + ((d & ((e = (e << 30) | (e >>> 2)) ^ a)) ^ a) // Ch(d,e,a)
+ + (i03 = input[offset + 2] << 24
+ | (input[offset + 3] & 0xff) << 16
+ | (input[offset + 4] & 0xff) << 8
+ | (input[offset += 5] & 0xff)); // W03
+ a += ((b << 5) | (b >>> 27)) + 0x5a827999 // K04
+ + ((c & ((d = (d << 30) | (d >>> 2)) ^ e)) ^ e) // Ch(c,d,e)
+ + (i04 = input[offset + 1] << 24
+ | (input[offset + 2] & 0xff) << 16
+ | (input[offset + 3] & 0xff) << 8
+ | (input[offset + 4] & 0xff)); // W04
+ e += ((a << 5) | (a >>> 27)) + 0x5a827999 // K05
+ + ((b & ((c = (c << 30) | (c >>> 2)) ^ d)) ^ d) // Ch(b,c,d)
+ + (i05 = input[offset += 5] << 24
+ | (input[offset + 1] & 0xff) << 16
+ | (input[offset + 2] & 0xff) << 8
+ | (input[offset + 3] & 0xff)); // W05
+ d += ((e << 5) | (e >>> 27)) + 0x5a827999 // K06
+ + ((a & ((b = (b << 30) | (b >>> 2)) ^ c)) ^ c) // Ch(a,b,c)
+ + (i06 = input[offset + 4] << 24
+ | (input[offset += 5] & 0xff) << 16
+ | (input[offset + 1] & 0xff) << 8
+ | (input[offset + 2] & 0xff)); // W06
+ c += ((d << 5) | (d >>> 27)) + 0x5a827999 // K07
+ + ((e & ((a = (a << 30) | (a >>> 2)) ^ b)) ^ b) // Ch(e,a,b)
+ + (i07 = input[offset + 3] << 24
+ | (input[offset + 4] & 0xff) << 16
+ | (input[offset += 5] & 0xff) << 8
+ | (input[offset + 1] & 0xff)); // W07
+ b += ((c << 5) | (c >>> 27)) + 0x5a827999 // K08
+ + ((d & ((e = (e << 30) | (e >>> 2)) ^ a)) ^ a) // Ch(d,e,a)
+ + (i08 = input[offset + 2] << 24
+ | (input[offset + 3] & 0xff) << 16
+ | (input[offset + 4] & 0xff) << 8
+ | (input[offset += 5] & 0xff)); // W08
+ a += ((b << 5) | (b >>> 27)) + 0x5a827999 // K09
+ + ((c & ((d = (d << 30) | (d >>> 2)) ^ e)) ^ e) // Ch(c,d,e)
+ + (i09 = input[offset + 1] << 24
+ | (input[offset + 2] & 0xff) << 16
+ | (input[offset + 3] & 0xff) << 8
+ | (input[offset + 4] & 0xff)); // W09
+ e += ((a << 5) | (a >>> 27)) + 0x5a827999 // K10
+ + ((b & ((c = (c << 30) | (c >>> 2)) ^ d)) ^ d) // Ch(b,c,d)
+ + (i10 = input[offset += 5] << 24
+ | (input[offset + 1] & 0xff) << 16
+ | (input[offset + 2] & 0xff) << 8
+ | (input[offset + 3] & 0xff)); // W10
+ d += ((e << 5) | (e >>> 27)) + 0x5a827999 // K11
+ + ((a & ((b = (b << 30) | (b >>> 2)) ^ c)) ^ c) // Ch(a,b,c)
+ + (i11 = input[offset + 4] << 24
+ | (input[offset += 5] & 0xff) << 16
+ | (input[offset + 1] & 0xff) << 8
+ | (input[offset + 2] & 0xff)); // W11
+ c += ((d << 5) | (d >>> 27)) + 0x5a827999 // K12
+ + ((e & ((a = (a << 30) | (a >>> 2)) ^ b)) ^ b) // Ch(e,a,b)
+ + (i12 = input[offset + 3] << 24
+ | (input[offset + 4] & 0xff) << 16
+ | (input[offset += 5] & 0xff) << 8
+ | (input[offset + 1] & 0xff)); // W12
+ b += ((c << 5) | (c >>> 27)) + 0x5a827999 // K13
+ + ((d & ((e = (e << 30) | (e >>> 2)) ^ a)) ^ a) // Ch(d,e,a)
+ + (i13 = input[offset + 2] << 24
+ | (input[offset + 3] & 0xff) << 16
+ | (input[offset + 4] & 0xff) << 8
+ | (input[offset += 5] & 0xff)); // W13
+ a += ((b << 5) | (b >>> 27)) + 0x5a827999 // K14
+ + ((c & ((d = (d << 30) | (d >>> 2)) ^ e)) ^ e) // Ch(c,d,e)
+ + (i14 = input[offset + 1] << 24
+ | (input[offset + 2] & 0xff) << 16
+ | (input[offset + 3] & 0xff) << 8
+ | (input[offset + 4] & 0xff)); // W14
+ e += ((a << 5) | (a >>> 27)) + 0x5a827999 // K15
+ + ((b & ((c = (c << 30) | (c >>> 2)) ^ d)) ^ d) // Ch(b,c,d)
+ + (i15 = input[offset += 5] << 24
+ | (input[offset + 1] & 0xff) << 16
+ | (input[offset + 2] & 0xff) << 8
+ | (input[offset + 3] & 0xff)); // W15
+ /* Second pass, on scheduled input (rounds 16..31). */
+ d += ((e << 5) | (e >>> 27)) + 0x5a827999 // K16
+ + ((a & ((b = (b << 30) | (b >>> 2)) ^ c)) ^ c) // Ch(a,b,c)
+ + (i00 = ((i00 ^= i02 ^ i08 ^ i13) << 1) | (i00 >>> 31)); // W16
+ c += ((d << 5) | (d >>> 27)) + 0x5a827999 // K17
+ + ((e & ((a = (a << 30) | (a >>> 2)) ^ b)) ^ b) // Ch(e,a,b)
+ + (i01 = ((i01 ^= i03 ^ i09 ^ i14) << 1) | (i01 >>> 31)); // W17
+ b += ((c << 5) | (c >>> 27)) + 0x5a827999 // K18
+ + ((d & ((e = (e << 30) | (e >>> 2)) ^ a)) ^ a) // Ch(d,e,a)
+ + (i02 = ((i02 ^= i04 ^ i10 ^ i15) << 1) | (i02 >>> 31)); // W18
+ a += ((b << 5) | (b >>> 27)) + 0x5a827999 // K19
+ + ((c & ((d = (d << 30) | (d >>> 2)) ^ e)) ^ e) // Ch(c,d,e)
+ + (i03 = ((i03 ^= i05 ^ i11 ^ i00) << 1) | (i03 >>> 31)); // W19
+ /* Use hash schedule function Parity (rounds 20..39):
+ * Parity(x,y,z) = x ^ y ^ z,
+ * and K20 = .... = K39 = 0x6ed9eba1. */
+ e += ((a << 5) | (a >>> 27)) + 0x6ed9eba1 // K20
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i04 = ((i04 ^= i06 ^ i12 ^ i01) << 1) | (i04 >>> 31)); // W20
+ d += ((e << 5) | (e >>> 27)) + 0x6ed9eba1 // K21
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i05 = ((i05 ^= i07 ^ i13 ^ i02) << 1) | (i05 >>> 31)); // W21
+ c += ((d << 5) | (d >>> 27)) + 0x6ed9eba1 // K22
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i06 = ((i06 ^= i08 ^ i14 ^ i03) << 1) | (i06 >>> 31)); // W22
+ b += ((c << 5) | (c >>> 27)) + 0x6ed9eba1 // K23
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i07 = ((i07 ^= i09 ^ i15 ^ i04) << 1) | (i07 >>> 31)); // W23
+ a += ((b << 5) | (b >>> 27)) + 0x6ed9eba1 // K24
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i08 = ((i08 ^= i10 ^ i00 ^ i05) << 1) | (i08 >>> 31)); // W24
+ e += ((a << 5) | (a >>> 27)) + 0x6ed9eba1 // K25
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i09 = ((i09 ^= i11 ^ i01 ^ i06) << 1) | (i09 >>> 31)); // W25
+ d += ((e << 5) | (e >>> 27)) + 0x6ed9eba1 // K26
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i10 = ((i10 ^= i12 ^ i02 ^ i07) << 1) | (i10 >>> 31)); // W26
+ c += ((d << 5) | (d >>> 27)) + 0x6ed9eba1 // K27
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i11 = ((i11 ^= i13 ^ i03 ^ i08) << 1) | (i11 >>> 31)); // W27
+ b += ((c << 5) | (c >>> 27)) + 0x6ed9eba1 // K28
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i12 = ((i12 ^= i14 ^ i04 ^ i09) << 1) | (i12 >>> 31)); // W28
+ a += ((b << 5) | (b >>> 27)) + 0x6ed9eba1 // K29
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i13 = ((i13 ^= i15 ^ i05 ^ i10) << 1) | (i13 >>> 31)); // W29
+ e += ((a << 5) | (a >>> 27)) + 0x6ed9eba1 // K30
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i14 = ((i14 ^= i00 ^ i06 ^ i11) << 1) | (i14 >>> 31)); // W30
+ d += ((e << 5) | (e >>> 27)) + 0x6ed9eba1 // K31
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i15 = ((i15 ^= i01 ^ i07 ^ i12) << 1) | (i15 >>> 31)); // W31
+ /* Third pass, on scheduled input (rounds 32..47). */
+ c += ((d << 5) | (d >>> 27)) + 0x6ed9eba1 // K32
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i00 = ((i00 ^= i02 ^ i08 ^ i13) << 1) | (i00 >>> 31)); // W32
+ b += ((c << 5) | (c >>> 27)) + 0x6ed9eba1 // K33
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i01 = ((i01 ^= i03 ^ i09 ^ i14) << 1) | (i01 >>> 31)); // W33
+ a += ((b << 5) | (b >>> 27)) + 0x6ed9eba1 // K34
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i02 = ((i02 ^= i04 ^ i10 ^ i15) << 1) | (i02 >>> 31)); // W34
+ e += ((a << 5) | (a >>> 27)) + 0x6ed9eba1 // K35
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i03 = ((i03 ^= i05 ^ i11 ^ i00) << 1) | (i03 >>> 31)); // W35
+ d += ((e << 5) | (e >>> 27)) + 0x6ed9eba1 // K36
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i04 = ((i04 ^= i06 ^ i12 ^ i01) << 1) | (i04 >>> 31)); // W36
+ c += ((d << 5) | (d >>> 27)) + 0x6ed9eba1 // K37
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i05 = ((i05 ^= i07 ^ i13 ^ i02) << 1) | (i05 >>> 31)); // W37
+ b += ((c << 5) | (c >>> 27)) + 0x6ed9eba1 // K38
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i06 = ((i06 ^= i08 ^ i14 ^ i03) << 1) | (i06 >>> 31)); // W38
+ a += ((b << 5) | (b >>> 27)) + 0x6ed9eba1 // K39
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i07 = ((i07 ^= i09 ^ i15 ^ i04) << 1) | (i07 >>> 31)); // W39
+ /* Use hash schedule function Maj (rounds 40..59):
+ * Maj(x,y,z) = (x&y) ^ (x&z) ^ (y&z) = (x & y) | ((x | y) & z),
+ * and K40 = .... = K59 = 0x8f1bbcdc. */
+ e += ((a << 5) | (a >>> 27)) + 0x8f1bbcdc // K40
+ + ((b & (c = (c << 30) | (c >>> 2))) | ((b | c) & d)) // Maj(b,c,d)
+ + (i08 = ((i08 ^= i10 ^ i00 ^ i05) << 1) | (i08 >>> 31)); // W40
+ d += ((e << 5) | (e >>> 27)) + 0x8f1bbcdc // K41
+ + ((a & (b = (b << 30) | (b >>> 2))) | ((a | b) & c)) // Maj(a,b,c)
+ + (i09 = ((i09 ^= i11 ^ i01 ^ i06) << 1) | (i09 >>> 31)); // W41
+ c += ((d << 5) | (d >>> 27)) + 0x8f1bbcdc // K42
+ + ((e & (a = (a << 30) | (a >>> 2))) | ((e | a) & b)) // Maj(e,a,b)
+ + (i10 = ((i10 ^= i12 ^ i02 ^ i07) << 1) | (i10 >>> 31)); // W42
+ b += ((c << 5) | (c >>> 27)) + 0x8f1bbcdc // K43
+ + ((d & (e = (e << 30) | (e >>> 2))) | ((d | e) & a)) // Maj(d,e,a)
+ + (i11 = ((i11 ^= i13 ^ i03 ^ i08) << 1) | (i11 >>> 31)); // W43
+ a += ((b << 5) | (b >>> 27)) + 0x8f1bbcdc // K44
+ + ((c & (d = (d << 30) | (d >>> 2))) | ((c | d) & e)) // Maj(c,d,e)
+ + (i12 = ((i12 ^= i14 ^ i04 ^ i09) << 1) | (i12 >>> 31)); // W44
+ e += ((a << 5) | (a >>> 27)) + 0x8f1bbcdc // K45
+ + ((b & (c = (c << 30) | (c >>> 2))) | ((b | c) & d)) // Maj(b,c,d)
+ + (i13 = ((i13 ^= i15 ^ i05 ^ i10) << 1) | (i13 >>> 31)); // W45
+ d += ((e << 5) | (e >>> 27)) + 0x8f1bbcdc // K46
+ + ((a & (b = (b << 30) | (b >>> 2))) | ((a | b) & c)) // Maj(a,b,c)
+ + (i14 = ((i14 ^= i00 ^ i06 ^ i11) << 1) | (i14 >>> 31)); // W46
+ c += ((d << 5) | (d >>> 27)) + 0x8f1bbcdc // K47
+ + ((e & (a = (a << 30) | (a >>> 2))) | ((e | a) & b)) // Maj(e,a,b)
+ + (i15 = ((i15 ^= i01 ^ i07 ^ i12) << 1) | (i15 >>> 31)); // W47
+ /* Fourth pass, on scheduled input (rounds 48..63). */
+ b += ((c << 5) | (c >>> 27)) + 0x8f1bbcdc // K48
+ + ((d & (e = (e << 30) | (e >>> 2))) | ((d | e) & a)) // Maj(d,e,a)
+ + (i00 = ((i00 ^= i02 ^ i08 ^ i13) << 1) | (i00 >>> 31)); // W48
+ a += ((b << 5) | (b >>> 27)) + 0x8f1bbcdc // K49
+ + ((c & (d = (d << 30) | (d >>> 2))) | ((c | d) & e)) // Maj(c,d,e)
+ + (i01 = ((i01 ^= i03 ^ i09 ^ i14) << 1) | (i01 >>> 31)); // W49
+ e += ((a << 5) | (a >>> 27)) + 0x8f1bbcdc // K50
+ + ((b & (c = (c << 30) | (c >>> 2))) | ((b | c) & d)) // Maj(b,c,d)
+ + (i02 = ((i02 ^= i04 ^ i10 ^ i15) << 1) | (i02 >>> 31)); // W50
+ d += ((e << 5) | (e >>> 27)) + 0x8f1bbcdc // K51
+ + ((a & (b = (b << 30) | (b >>> 2))) | ((a | b) & c)) // Maj(a,b,c)
+ + (i03 = ((i03 ^= i05 ^ i11 ^ i00) << 1) | (i03 >>> 31)); // W51
+ c += ((d << 5) | (d >>> 27)) + 0x8f1bbcdc // K52
+ + ((e & (a = (a << 30) | (a >>> 2))) | ((e | a) & b)) // Maj(e,a,b)
+ + (i04 = ((i04 ^= i06 ^ i12 ^ i01) << 1) | (i04 >>> 31)); // W52
+ b += ((c << 5) | (c >>> 27)) + 0x8f1bbcdc // K53
+ + ((d & (e = (e << 30) | (e >>> 2))) | ((d | e) & a)) // Maj(d,e,a)
+ + (i05 = ((i05 ^= i07 ^ i13 ^ i02) << 1) | (i05 >>> 31)); // W53
+ a += ((b << 5) | (b >>> 27)) + 0x8f1bbcdc // K54
+ + ((c & (d = (d << 30) | (d >>> 2))) | ((c | d) & e)) // Maj(c,d,e)
+ + (i06 = ((i06 ^= i08 ^ i14 ^ i03) << 1) | (i06 >>> 31)); // W54
+ e += ((a << 5) | (a >>> 27)) + 0x8f1bbcdc // K55
+ + ((b & (c = (c << 30) | (c >>> 2))) | ((b | c) & d)) // Maj(b,c,d)
+ + (i07 = ((i07 ^= i09 ^ i15 ^ i04) << 1) | (i07 >>> 31)); // W55
+ d += ((e << 5) | (e >>> 27)) + 0x8f1bbcdc // K56
+ + ((a & (b = (b << 30) | (b >>> 2))) | ((a | b) & c)) // Maj(a,b,c)
+ + (i08 = ((i08 ^= i10 ^ i00 ^ i05) << 1) | (i08 >>> 31)); // W56
+ c += ((d << 5) | (d >>> 27)) + 0x8f1bbcdc // K57
+ + ((e & (a = (a << 30) | (a >>> 2))) | ((e | a) & b)) // Maj(e,a,b)
+ + (i09 = ((i09 ^= i11 ^ i01 ^ i06) << 1) | (i09 >>> 31)); // W57
+ b += ((c << 5) | (c >>> 27)) + 0x8f1bbcdc // K58
+ + ((d & (e = (e << 30) | (e >>> 2))) | ((d | e) & a)) // Maj(d,e,a)
+ + (i10 = ((i10 ^= i12 ^ i02 ^ i07) << 1) | (i10 >>> 31)); // W58
+ a += ((b << 5) | (b >>> 27)) + 0x8f1bbcdc // K59
+ + ((c & (d = (d << 30) | (d >>> 2))) | ((c | d) & e)) // Maj(c,d,e)
+ + (i11 = ((i11 ^= i13 ^ i03 ^ i08) << 1) | (i11 >>> 31)); // W59
+ /* Use hash schedule function Parity (rounds 60..79):
+ * Parity(x,y,z) = x ^ y ^ z,
+ * and K60 = .... = K79 = 0xca62c1d6. */
+ e += ((a << 5) | (a >>> 27)) + 0xca62c1d6 // K60
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i12 = ((i12 ^= i14 ^ i04 ^ i09) << 1) | (i12 >>> 31)); // W60
+ d += ((e << 5) | (e >>> 27)) + 0xca62c1d6 // K61
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i13 = ((i13 ^= i15 ^ i05 ^ i10) << 1) | (i13 >>> 31)); // W61
+ c += ((d << 5) | (d >>> 27)) + 0xca62c1d6 // K62
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i14 = ((i14 ^= i00 ^ i06 ^ i11) << 1) | (i14 >>> 31)); // W62
+ b += ((c << 5) | (c >>> 27)) + 0xca62c1d6 // K63
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i15 = ((i15 ^= i01 ^ i07 ^ i12) << 1) | (i15 >>> 31)); // W63
+ /* Fifth pass, on scheduled input (rounds 64..79). */
+ a += ((b << 5) | (b >>> 27)) + 0xca62c1d6 // K64
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i00 = ((i00 ^= i02 ^ i08 ^ i13) << 1) | (i00 >>> 31)); // W64
+ e += ((a << 5) | (a >>> 27)) + 0xca62c1d6 // K65
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i01 = ((i01 ^= i03 ^ i09 ^ i14) << 1) | (i01 >>> 31)); // W65
+ d += ((e << 5) | (e >>> 27)) + 0xca62c1d6 // K66
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i02 = ((i02 ^= i04 ^ i10 ^ i15) << 1) | (i02 >>> 31)); // W66
+ c += ((d << 5) | (d >>> 27)) + 0xca62c1d6 // K67
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i03 = ((i03 ^= i05 ^ i11 ^ i00) << 1) | (i03 >>> 31)); // W67
+ b += ((c << 5) | (c >>> 27)) + 0xca62c1d6 // K68
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i04 = ((i04 ^= i06 ^ i12 ^ i01) << 1) | (i04 >>> 31)); // W68
+ a += ((b << 5) | (b >>> 27)) + 0xca62c1d6 // K69
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i05 = ((i05 ^= i07 ^ i13 ^ i02) << 1) | (i05 >>> 31)); // W69
+ e += ((a << 5) | (a >>> 27)) + 0xca62c1d6 // K70
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i06 = ((i06 ^= i08 ^ i14 ^ i03) << 1) | (i06 >>> 31)); // W70
+ d += ((e << 5) | (e >>> 27)) + 0xca62c1d6 // K71
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i07 = ((i07 ^= i09 ^ i15 ^ i04) << 1) | (i07 >>> 31)); // W71
+ c += ((d << 5) | (d >>> 27)) + 0xca62c1d6 // K72
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i08 = ((i08 ^= i10 ^ i00 ^ i05) << 1) | (i08 >>> 31)); // W72
+ b += ((c << 5) | (c >>> 27)) + 0xca62c1d6 // K73
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i09 = ((i09 ^= i11 ^ i01 ^ i06) << 1) | (i09 >>> 31)); // W73
+ a += ((b << 5) | (b >>> 27)) + 0xca62c1d6 // K74
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i10 = ((i10 ^= i12 ^ i02 ^ i07) << 1) | (i10 >>> 31)); // W74
+ e += ((a << 5) | (a >>> 27)) + 0xca62c1d6 // K75
+ + (b ^ (c = (c << 30) | (c >>> 2)) ^ d) // Parity(b,c,d)
+ + (i11 = ((i11 ^= i13 ^ i03 ^ i08) << 1) | (i11 >>> 31)); // W75
+ d += ((e << 5) | (e >>> 27)) + 0xca62c1d6 // K76
+ + (a ^ (b = (b << 30) | (b >>> 2)) ^ c) // Parity(a,b,c)
+ + (i12 = ((i12 ^= i14 ^ i04 ^ i09) << 1) | (i12 >>> 31)); // W76
+ c += ((d << 5) | (d >>> 27)) + 0xca62c1d6 // K77
+ + (e ^ (a = (a << 30) | (a >>> 2)) ^ b) // Parity(e,a,b)
+ + (i13 = ((i13 ^= i15 ^ i05 ^ i10) << 1) | (i13 >>> 31)); // W77
+ /* Terminate the last two rounds of fifth pass,
+ * feeding the final digest on the fly. */
+ hB +=
+ b += ((c << 5) | (c >>> 27)) + 0xca62c1d6 // K78
+ + (d ^ (e = (e << 30) | (e >>> 2)) ^ a) // Parity(d,e,a)
+ + (i14 = ((i14 ^= i00 ^ i06 ^ i11) << 1) | (i14 >>> 31)); // W78
+ hA +=
+ a += ((b << 5) | (b >>> 27)) + 0xca62c1d6 // K79
+ + (c ^ (d = (d << 30) | (d >>> 2)) ^ e) // Parity(c,d,e)
+ + (i15 = ((i15 ^= i01 ^ i07 ^ i12) << 1) | (i15 >>> 31)); // W79
+ hE += e;
+ hD += d;
+ hC += /* c= */ (c << 30) | (c >>> 2);
+ }
+}
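The block function above is driven through the class's engineUpdate()/engineDigest() methods, exactly as the test file that follows does. Below is a minimal sketch of digesting the first FIPS 180-1 test vector with it; the SHA1Example class name is illustrative, and it sits in the same package because the test relies on package-level access to the engine methods:

package net.i2p.crypto;

/** Illustrative sketch: digest a short message with the SHA1 class above. */
public class SHA1Example {
    public static void main(String[] args) {
        SHA1 sha1 = new SHA1();
        byte[] data = "abc".getBytes();           // FIPS 180-1 test vector #1
        sha1.engineUpdate(data, 0, data.length);  // absorb the message
        byte[] digest = sha1.engineDigest();      // 20-byte SHA-1 digest
        StringBuffer hex = new StringBuffer(40);
        for (int i = 0; i < digest.length; i++) {
            hex.append(Character.forDigit((digest[i] >> 4) & 0xF, 16))
               .append(Character.forDigit(digest[i] & 0xF, 16));
        }
        // Expected: a9993e364706816aba3e25717850c26c9cd0d89d
        System.out.println("SHA-1(\"abc\") = " + hex);
    }
}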
diff --git a/src/net/i2p/crypto/SHA1Test.java b/src/net/i2p/crypto/SHA1Test.java
new file mode 100644
index 0000000..69ad3be
--- /dev/null
+++ b/src/net/i2p/crypto/SHA1Test.java
@@ -0,0 +1,191 @@
+package net.i2p.crypto;
+/* @(#)SHA1Test.java 1.10 2004-04-24
+ * This file was freely contributed to the LimeWire project and is covered
+ * by its existing GPL licence, but it may be used individually as a public
+ * domain implementation of a published algorithm (see below for references).
+ * It was also freely contributed to the Bitzi public domain sources.
+ * @author Philippe Verdy
+ */
+
+/* Sun may wish to change the following package name, if integrating this
+ * class in the Sun JCE Security Provider for Java 1.5 (code-named Tiger).
+ */
+//package com.bitzi.util;
+
+import java.security.*;
+
+public class SHA1Test {
+
+ private static final SHA1 hash = new SHA1();
+
+ public static void main(String args[]) {
+// http://csrc.nist.gov/publications/fips/fips180-2/fips180-2.pdf
+ System.out.println("****************************************");
+ System.out.println("* Basic FIPS PUB 180-1 test vectors... *");
+ System.out.println("****************************************");
+ tst(1, 1,
+ "abc",
+ "A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D");
+ tst(1, 2,
+ "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
+ "84983E44 1C3BD26e BAAE4AA1 F95129E5 E54670F1");
+ tst(1, 3, /* one million bytes */
+ 1000000, "a",
+ "34AA973C D4C4DAA4 F61EEB2B DBAD2731 6534016F");
+ System.out.println();
+
+// http://csrc.ncsl.nist.gov/cryptval/shs/SHAVS.pdf
+ System.out.println("********************************************************");
+ System.out.println("* SHSV Examples of the selected short messages test... *");
+ System.out.println("********************************************************");
+ tst(2, 2, new byte[] {/* 8 bits, i.e. 1 byte */
+ (byte)0x5e},
+ "5e6f80a3 4a9798ca fc6a5db9 6cc57ba4 c4db59c2");
+ tst(2, 4, new byte[] {/* 128 bits, i.e. 16 bytes */
+ (byte)0x9a,(byte)0x7d,(byte)0xfd,(byte)0xf1,(byte)0xec,(byte)0xea,(byte)0xd0,(byte)0x6e,
+ (byte)0xd6,(byte)0x46,(byte)0xaa,(byte)0x55,(byte)0xfe,(byte)0x75,(byte)0x71,(byte)0x46},
+ "82abff66 05dbe1c1 7def12a3 94fa22a8 2b544a35");
+ System.out.println();
+
+ System.out.println("*******************************************************");
+ System.out.println("* SHSV Examples of the selected long messages test... *");
+ System.out.println("*******************************************************");
+ tst(3, 2, new byte[] {/* 1304 bits, i.e. 163 bytes */
+ (byte)0xf7,(byte)0x8f,(byte)0x92,(byte)0x14,(byte)0x1b,(byte)0xcd,(byte)0x17,(byte)0x0a,
+ (byte)0xe8,(byte)0x9b,(byte)0x4f,(byte)0xba,(byte)0x15,(byte)0xa1,(byte)0xd5,(byte)0x9f,
+ (byte)0x3f,(byte)0xd8,(byte)0x4d,(byte)0x22,(byte)0x3c,(byte)0x92,(byte)0x51,(byte)0xbd,
+ (byte)0xac,(byte)0xbb,(byte)0xae,(byte)0x61,(byte)0xd0,(byte)0x5e,(byte)0xd1,(byte)0x15,
+ (byte)0xa0,(byte)0x6a,(byte)0x7c,(byte)0xe1,(byte)0x17,(byte)0xb7,(byte)0xbe,(byte)0xea,
+ (byte)0xd2,(byte)0x44,(byte)0x21,(byte)0xde,(byte)0xd9,(byte)0xc3,(byte)0x25,(byte)0x92,
+ (byte)0xbd,(byte)0x57,(byte)0xed,(byte)0xea,(byte)0xe3,(byte)0x9c,(byte)0x39,(byte)0xfa,
+ (byte)0x1f,(byte)0xe8,(byte)0x94,(byte)0x6a,(byte)0x84,(byte)0xd0,(byte)0xcf,(byte)0x1f,
+ (byte)0x7b,(byte)0xee,(byte)0xad,(byte)0x17,(byte)0x13,(byte)0xe2,(byte)0xe0,(byte)0x95,
+ (byte)0x98,(byte)0x97,(byte)0x34,(byte)0x7f,(byte)0x67,(byte)0xc8,(byte)0x0b,(byte)0x04,
+ (byte)0x00,(byte)0xc2,(byte)0x09,(byte)0x81,(byte)0x5d,(byte)0x6b,(byte)0x10,(byte)0xa6,
+ (byte)0x83,(byte)0x83,(byte)0x6f,(byte)0xd5,(byte)0x56,(byte)0x2a,(byte)0x56,(byte)0xca,
+ (byte)0xb1,(byte)0xa2,(byte)0x8e,(byte)0x81,(byte)0xb6,(byte)0x57,(byte)0x66,(byte)0x54,
+ (byte)0x63,(byte)0x1c,(byte)0xf1,(byte)0x65,(byte)0x66,(byte)0xb8,(byte)0x6e,(byte)0x3b,
+ (byte)0x33,(byte)0xa1,(byte)0x08,(byte)0xb0,(byte)0x53,(byte)0x07,(byte)0xc0,(byte)0x0a,
+ (byte)0xff,(byte)0x14,(byte)0xa7,(byte)0x68,(byte)0xed,(byte)0x73,(byte)0x50,(byte)0x60,
+ (byte)0x6a,(byte)0x0f,(byte)0x85,(byte)0xe6,(byte)0xa9,(byte)0x1d,(byte)0x39,(byte)0x6f,
+ (byte)0x5b,(byte)0x5c,(byte)0xbe,(byte)0x57,(byte)0x7f,(byte)0x9b,(byte)0x38,(byte)0x80,
+ (byte)0x7c,(byte)0x7d,(byte)0x52,(byte)0x3d,(byte)0x6d,(byte)0x79,(byte)0x2f,(byte)0x6e,
+ (byte)0xbc,(byte)0x24,(byte)0xa4,(byte)0xec,(byte)0xf2,(byte)0xb3,(byte)0xa4,(byte)0x27,
+ (byte)0xcd,(byte)0xbb,(byte)0xfb},
+ "cb0082c8 f197d260 991ba6a4 60e76e20 2bad27b3");
+ System.out.println();
+
+// See also http://csrc.ncsl.nist.gov/cryptval/shs/sha1-vectors.zip
+
+ {
+ final int RETRIES = 10;
+ final int ITERATIONS = 2000;
+ final int BLOCKSIZE = 65536;
+ byte[] input = new byte[BLOCKSIZE];
+ for (int i = BLOCKSIZE; --i >= 0; )
+ input[i] = (byte)i;
+ long best = 0;
+ for (int i = 0; i < 1000; i++) // training for stable measure
+ System.currentTimeMillis();
+
+ for (int retry = 0; retry < RETRIES; retry++) {
+ long t0 = System.currentTimeMillis();
+ for (int i = ITERATIONS; --i >= 0; );
+ long t1 = System.currentTimeMillis();
+ for (int i = ITERATIONS; --i >= 0; )
+ hash.engineUpdate(input, 0, BLOCKSIZE);
+ long t2 = System.currentTimeMillis();
+ long time = (t2 - t1) - (t1 - t0);
+ if (retry == 0 || time < best)
+ best = time;
+ }
+ hash.engineReset();
+ double rate = 1000.0 * ITERATIONS * BLOCKSIZE / best;
+ System.out.println("Our rate = " +
+ (float)(rate * 8) + " bits/s = " +
+ (float)(rate / (1024 * 1024)) + " Megabytes/s");
+ // Java 1.5 beta-b32c, on Athlon XP 1800+:
+ // with java -client: 48.21 Megabytes/s.
+ // with java -server: 68.23 Megabytes/s.
+
+ try {
+ MessageDigest md = MessageDigest.getInstance("SHA");
+ for (int retry = 0; retry < RETRIES; retry++) {
+ long t0 = System.currentTimeMillis();
+ for (int i = ITERATIONS; --i >= 0; );
+ long t1 = System.currentTimeMillis();
+ for (int i = ITERATIONS; --i >= 0; )
+ md.update(input, 0, BLOCKSIZE);
+ long t2 = System.currentTimeMillis();
+ long time = (t2 - t1) - (t1 - t0);
+ if (retry == 0 || time < best)
+ best = time;
+ }
+ md.reset();
+ rate = 1000.0 * ITERATIONS * BLOCKSIZE / best;
+ System.out.println("JCE rate = " +
+ (float)(rate * 8) + " bits/s = " +
+ (float)(rate / (1024 * 1024)) + " Megabytes/s");
+ } catch (NoSuchAlgorithmException nsae) {
+ System.out.println("No SHA algorithm in local JCE Security Providers");
+ }
+ // Java 1.5 beta-b32c, on Athlon XP 1800+:
+ // with java -client: 23.20 Megabytes/s.
+ // with java -server: 45.72 Megabytes/s.
+ }
+ }
+
+ private static final boolean tst(final int set, final int vector,
+ final String source,
+ final String expect) {
+ byte[] input = new byte[source.length()];
+ for (int i = 0; i < input.length; i++)
+ input[i] = (byte)source.charAt(i);
+ return tst(set, vector, input, expect);
+ }
+
+ private static final boolean tst(final int set, final int vector,
+ final byte[] input,
+ final String expect) {
+ System.out.print("Set " + set + ", vector# " + vector + ": ");
+ hash.engineUpdate(input, 0, input.length);
+ return tstResult(expect);
+ }
+
+ private static final boolean tst(final int set, final int vector,
+ final int times, final String source,
+ final String expect) {
+ byte[] input = new byte[source.length()];
+ for (int i = 0; i < input.length; i++)
+ input[i] = (byte)source.charAt(i);
+ System.out.print("Set " + set + ", vector# " + vector + ": ");
+ for (int i = 0; i < times; i++)
+ hash.engineUpdate(input, 0, input.length);
+ return tstResult(expect);
+ }
+
+ private static final boolean tstResult(String expect) {
+ final String result = toHex(hash.engineDigest());
+ expect = expect.toUpperCase();
+ if (!expect.equals(result)) {
+ System.out.println("**************** WRONG ***************");
+ System.out.println(" expect: " + expect);
+ System.out.println(" result: " + result);
+ return false;
+ }
+ System.out.println("OK");
+ return true;
+ }
+
+ private static final String toHex(final byte[] bytes) {
+ StringBuffer buf = new StringBuffer(bytes.length * 2);
+ for (int i = 0; i < bytes.length; i++) {
+ if ((i & 3) == 0 && i != 0)
+ buf.append(' ');
+ buf.append(HEX.charAt((bytes[i] >> 4) & 0xF))
+ .append(HEX.charAt( bytes[i] & 0xF));
+ }
+ return buf.toString();
+ }
+ private static final String HEX = "0123456789ABCDEF";
+}
diff --git a/src/net/i2p/crypto/SHA256Generator.java b/src/net/i2p/crypto/SHA256Generator.java
new file mode 100644
index 0000000..96d533a
--- /dev/null
+++ b/src/net/i2p/crypto/SHA256Generator.java
@@ -0,0 +1,78 @@
+package net.i2p.crypto;
+
+import java.util.Arrays;
+import java.util.ArrayList;
+import java.util.List;
+import net.i2p.I2PAppContext;
+import net.i2p.data.Base64;
+import net.i2p.data.Hash;
+
+import gnu.crypto.hash.Sha256Standalone;
+
+/**
+ * Defines a wrapper for SHA-256 operation. All the good stuff occurs
+ * in the GNU-Crypto {@link gnu.crypto.hash.Sha256Standalone}
+ *
+ */
+public final class SHA256Generator {
+ private List _digests;
+ private List _digestsGnu;
+ public SHA256Generator(I2PAppContext context) {
+ _digests = new ArrayList(32);
+ _digestsGnu = new ArrayList(32);
+ }
+
+ public static final SHA256Generator getInstance() {
+ return I2PAppContext.getGlobalContext().sha();
+ }
+
+    /** Calculate the SHA-256 hash of the source
+ * @param source what to hash
+ * @return hash of the source
+ */
+ public final Hash calculateHash(byte[] source) {
+ return calculateHash(source, 0, source.length);
+ }
+ public final Hash calculateHash(byte[] source, int start, int len) {
+ Sha256Standalone digest = acquireGnu();
+ digest.update(source, start, len);
+ byte rv[] = digest.digest();
+ releaseGnu(digest);
+ return new Hash(rv);
+ }
+
+ public final void calculateHash(byte[] source, int start, int len, byte out[], int outOffset) {
+ Sha256Standalone digest = acquireGnu();
+ digest.update(source, start, len);
+ byte rv[] = digest.digest();
+ releaseGnu(digest);
+ System.arraycopy(rv, 0, out, outOffset, rv.length);
+ }
+
+ private Sha256Standalone acquireGnu() {
+ Sha256Standalone rv = null;
+ synchronized (_digestsGnu) {
+ if (_digestsGnu.size() > 0)
+ rv = (Sha256Standalone)_digestsGnu.remove(0);
+ }
+ if (rv != null)
+ rv.reset();
+ else
+ rv = new Sha256Standalone();
+ return rv;
+ }
+
+ private void releaseGnu(Sha256Standalone digest) {
+ synchronized (_digestsGnu) {
+ if (_digestsGnu.size() < 32) {
+ _digestsGnu.add(digest);
+ }
+ }
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ for (int i = 0; i < args.length; i++)
+ System.out.println("SHA256 [" + args[i] + "] = [" + Base64.encode(ctx.sha().calculateHash(args[i].getBytes()).getData()) + "]");
+ }
+}
\ No newline at end of file
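A minimal usage sketch for this wrapper, mirroring the main() method above; the Sha256Example class name is illustrative, and it assumes the global I2PAppContext is usable outside a running router:

import net.i2p.I2PAppContext;
import net.i2p.crypto.SHA256Generator;
import net.i2p.data.Base64;
import net.i2p.data.Hash;

public class Sha256Example {
    public static void main(String[] args) {
        // The context-scoped generator pools Sha256Standalone instances,
        // so callers avoid constructing a new digest object per hash.
        SHA256Generator sha = I2PAppContext.getGlobalContext().sha();
        Hash h = sha.calculateHash("hello world".getBytes());
        System.out.println("SHA-256 = " + Base64.encode(h.getData()));
    }
}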
diff --git a/src/net/i2p/crypto/SessionKeyManager.java b/src/net/i2p/crypto/SessionKeyManager.java
new file mode 100644
index 0000000..5b60934
--- /dev/null
+++ b/src/net/i2p/crypto/SessionKeyManager.java
@@ -0,0 +1,133 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.util.Set;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.PublicKey;
+import net.i2p.data.SessionKey;
+import net.i2p.data.SessionTag;
+
+/**
+ * Manage the session keys and session tags used for encryption and decryption.
+ * This base implementation simply ignores sessions and acts as if everything is
+ * unknown (and hence always forces a full ElGamal encryption for each message).
+ * A more intelligent subclass should manage and persist keys and tags.
+ *
+ */
+public class SessionKeyManager {
+ /** session key managers must be created through an app context */
+ protected SessionKeyManager(I2PAppContext context) { // nop
+ }
+
+ /** see above */
+ private SessionKeyManager() { // nop
+ }
+
+ /**
+ * Retrieve the session key currently associated with encryption to the target,
+ * or null if a new session key should be generated.
+ *
+ */
+ public SessionKey getCurrentKey(PublicKey target) {
+ return null;
+ }
+
+ /**
+ * Associate a new session key with the specified target. Metrics to determine
+ * when to expire that key begin with this call.
+ *
+ */
+ public void createSession(PublicKey target, SessionKey key) { // nop
+ }
+
+ /**
+ * Generate a new session key and associate it with the specified target.
+ *
+ */
+ public SessionKey createSession(PublicKey target) {
+ SessionKey key = KeyGenerator.getInstance().generateSessionKey();
+ createSession(target, key);
+ return key;
+ }
+
+ /**
+ * Retrieve the next available session tag for identifying the use of the given
+ * key when communicating with the target. If this returns null, no tags are
+ * available so ElG should be used with the given key (a new sessionKey should
+ * NOT be used)
+ *
+ */
+ public SessionTag consumeNextAvailableTag(PublicKey target, SessionKey key) {
+ return null;
+ }
+
+ /**
+ * Determine (approximately) how many available session tags for the current target
+ * have been confirmed and are available
+ *
+ */
+ public int getAvailableTags(PublicKey target, SessionKey key) {
+ return 0;
+ }
+
+ /**
+ * Determine how long the available tags will be available for before expiring, in
+ * milliseconds
+ */
+ public long getAvailableTimeLeft(PublicKey target, SessionKey key) {
+ return 0;
+ }
+
+ /**
+ * Take note of the fact that the given sessionTags associated with the key for
+ * encryption to the target have definitely been received at the target (aka call this
+ * method after receiving an ack to a message delivering them)
+ *
+ */
+ public void tagsDelivered(PublicKey target, SessionKey key, Set sessionTags) { // nop
+ }
+
+ /**
+ * Mark all of the tags delivered to the target up to this point as invalid, since the peer
+ * has failed to respond when they should have. This call essentially lets the system recover
+ * from corrupted tag sets and crashes
+ *
+ */
+ public void failTags(PublicKey target) { // nop
+ }
+
+ /**
+ * Accept the given tags and associate them with the given key for decryption
+ *
+ */
+ public void tagsReceived(SessionKey key, Set sessionTags) { // nop
+ }
+
+ /**
+ * Determine if we have received a session key associated with the given session tag,
+ * and if so, discard it (but keep track for frequent dups) and return the decryption
+ * key it was received with (via tagsReceived(...)). returns null if no session key
+ * matches
+ *
+ */
+ public SessionKey consumeTag(SessionTag tag) {
+ return null;
+ }
+
+ /**
+ * Called when the system is closing down, instructing the session key manager to take
+ * whatever precautions are necessary (saving state, etc)
+ *
+ */
+ public void shutdown() { // nop
+ }
+}
\ No newline at end of file
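The javadocs above spell out the outbound contract: no current key means a fresh session plus a full ElGamal block, an available tag means the cheap AES path, and exhausted tags mean ElGamal again with the same key. The sketch below ties those calls together; the OutboundFlowSketch/encryptTo names are illustrative, and the actual encryption steps are left as comments:

import net.i2p.crypto.SessionKeyManager;
import net.i2p.data.PublicKey;
import net.i2p.data.SessionKey;
import net.i2p.data.SessionTag;

class OutboundFlowSketch {
    /** Decide how to encrypt to 'target', following the contract documented above. */
    static void encryptTo(SessionKeyManager mgr, PublicKey target) {
        SessionKey key = mgr.getCurrentKey(target);
        if (key == null) {
            // No usable session: register a new key and ElGamal-encrypt this message.
            key = mgr.createSession(target);
            // ... ElGamal block carrying the session key + payload goes here ...
        } else {
            SessionTag tag = mgr.consumeNextAvailableTag(target, key);
            if (tag != null) {
                // Cheap path: AES with the existing key, message prefixed by the tag.
            } else {
                // Tags exhausted: ElGamal again, reusing the same key (per the javadoc).
            }
        }
        // Once the peer acks a message that delivered fresh tags:
        // mgr.tagsDelivered(target, key, deliveredTags);
    }
}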
diff --git a/src/net/i2p/crypto/TransientSessionKeyManager.java b/src/net/i2p/crypto/TransientSessionKeyManager.java
new file mode 100644
index 0000000..fbe0448
--- /dev/null
+++ b/src/net/i2p/crypto/TransientSessionKeyManager.java
@@ -0,0 +1,731 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+import net.i2p.data.PublicKey;
+import net.i2p.data.SessionKey;
+import net.i2p.data.SessionTag;
+import net.i2p.util.Log;
+import net.i2p.util.SimpleTimer;
+
+/**
+ * Implement the session key management, but keep everything in memory (don't write
+ * to disk). However, this being java, we cannot guarantee that the keys aren't swapped
+ * out to disk so this should not be considered secure in that sense.
+ *
+ */
+class TransientSessionKeyManager extends SessionKeyManager {
+ private Log _log;
+ /** Map allowing us to go from the targeted PublicKey to the OutboundSession used */
+ private Map _outboundSessions;
+ /** Map allowing us to go from a SessionTag to the containing TagSet */
+ private Map _inboundTagSets;
+ protected I2PAppContext _context;
+
+ /**
+ * Let session tags sit around for 10 minutes before expiring them. We can now have such a large
+ * value since there is the persistent session key manager. This value is for outbound tags -
+ * inbound tags are managed by SESSION_LIFETIME_MAX_MS
+ *
+ */
+ public final static long SESSION_TAG_DURATION_MS = 10 * 60 * 1000;
+ /**
+ * Keep unused inbound session tags around for up to 12 minutes (2 minutes longer than
+ * session tags are used on the outbound side so that no reasonable network lag
+ * can cause failed decrypts)
+ *
+ */
+ public final static long SESSION_LIFETIME_MAX_MS = SESSION_TAG_DURATION_MS + 5 * 60 * 1000;
+ public final static int MAX_INBOUND_SESSION_TAGS = 500 * 1000; // this will consume at most a few MB
+
+ /**
+ * The session key manager should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public TransientSessionKeyManager(I2PAppContext context) {
+ super(context);
+ _log = context.logManager().getLog(TransientSessionKeyManager.class);
+ _context = context;
+ _outboundSessions = new HashMap(1024);
+ _inboundTagSets = new HashMap(1024);
+ context.statManager().createRateStat("crypto.sessionTagsExpired", "How many tags/sessions are expired?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
+ context.statManager().createRateStat("crypto.sessionTagsRemaining", "How many tags/sessions are remaining after a cleanup?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
+ SimpleTimer.getInstance().addEvent(new CleanupEvent(), 60*1000);
+ }
+ private TransientSessionKeyManager() { this(null); }
+
+ private class CleanupEvent implements SimpleTimer.TimedEvent {
+ public void timeReached() {
+ long beforeExpire = _context.clock().now();
+ int expired = aggressiveExpire();
+ long expireTime = _context.clock().now() - beforeExpire;
+ _context.statManager().addRateData("crypto.sessionTagsExpired", expired, expireTime);
+ SimpleTimer.getInstance().addEvent(CleanupEvent.this, 60*1000);
+ }
+ }
+
+ /** TagSet */
+ protected Set getInboundTagSets() {
+ synchronized (_inboundTagSets) {
+ return new HashSet(_inboundTagSets.values());
+ }
+ }
+
+ /** OutboundSession */
+ protected Set getOutboundSessions() {
+ synchronized (_outboundSessions) {
+ return new HashSet(_outboundSessions.values());
+ }
+ }
+
+ protected void setData(Set inboundTagSets, Set outboundSessions) {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Loading " + inboundTagSets.size() + " inbound tag sets, and "
+ + outboundSessions.size() + " outbound sessions");
+ Map tagSets = new HashMap(inboundTagSets.size());
+ for (Iterator iter = inboundTagSets.iterator(); iter.hasNext();) {
+ TagSet ts = (TagSet) iter.next();
+ for (Iterator tsIter = ts.getTags().iterator(); tsIter.hasNext();) {
+ SessionTag tag = (SessionTag) tsIter.next();
+ tagSets.put(tag, ts);
+ }
+ }
+ synchronized (_inboundTagSets) {
+ _inboundTagSets.clear();
+ _inboundTagSets.putAll(tagSets);
+ }
+ Map sessions = new HashMap(outboundSessions.size());
+ for (Iterator iter = outboundSessions.iterator(); iter.hasNext();) {
+ OutboundSession sess = (OutboundSession) iter.next();
+ sessions.put(sess.getTarget(), sess);
+ }
+ synchronized (_outboundSessions) {
+ _outboundSessions.clear();
+ _outboundSessions.putAll(sessions);
+ }
+ }
+
+ /**
+ * Retrieve the session key currently associated with encryption to the target,
+ * or null if a new session key should be generated.
+ *
+ */
+ public SessionKey getCurrentKey(PublicKey target) {
+ OutboundSession sess = getSession(target);
+ if (sess == null) return null;
+ long now = _context.clock().now();
+ if (sess.getLastUsedDate() < now - SESSION_LIFETIME_MAX_MS) {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Expiring old session key established on "
+ + new Date(sess.getEstablishedDate())
+ + " but not used for "
+ + (now-sess.getLastUsedDate())
+ + "ms with target " + target);
+ return null;
+ }
+ return sess.getCurrentKey();
+ }
+
+ /**
+ * Associate a new session key with the specified target. Metrics to determine
+ * when to expire that key begin with this call.
+ *
+ */
+ public void createSession(PublicKey target, SessionKey key) {
+ OutboundSession sess = new OutboundSession(target);
+ sess.setCurrentKey(key);
+ addSession(sess);
+ }
+
+ /**
+ * Retrieve the next available session tag for identifying the use of the given
+ * key when communicating with the target. If this returns null, no tags are
+ * available so ElG should be used with the given key (a new sessionKey should
+ * NOT be used)
+ *
+ */
+ public SessionTag consumeNextAvailableTag(PublicKey target, SessionKey key) {
+ OutboundSession sess = getSession(target);
+ if (sess == null) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("No session for " + target);
+ return null;
+ }
+ if (sess.getCurrentKey().equals(key)) {
+ SessionTag nxt = sess.consumeNext();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Tag consumed: " + nxt + " with key: " + key.toBase64());
+ return nxt;
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Key does not match existing key, no tag");
+ return null;
+ }
+
+ /**
+ * Determine (approximately) how many available session tags for the current target
+ * have been confirmed and are available
+ *
+ */
+ public int getAvailableTags(PublicKey target, SessionKey key) {
+ OutboundSession sess = getSession(target);
+ if (sess == null) { return 0; }
+ if (sess.getCurrentKey().equals(key)) {
+ return sess.availableTags();
+ }
+ return 0;
+ }
+
+ /**
+ * Determine how long the available tags will be available for before expiring, in
+ * milliseconds
+ */
+ public long getAvailableTimeLeft(PublicKey target, SessionKey key) {
+ OutboundSession sess = getSession(target);
+ if (sess == null) { return 0; }
+ if (sess.getCurrentKey().equals(key)) {
+ long end = sess.getLastExpirationDate();
+ if (end <= 0)
+ return 0;
+ else
+ return end - _context.clock().now();
+ }
+ return 0;
+ }
+
+ /**
+ * Take note of the fact that the given sessionTags associated with the key for
+ * encryption to the target have definitely been received at the target (aka call this
+ * method after receiving an ack to a message delivering them)
+ *
+ */
+ public void tagsDelivered(PublicKey target, SessionKey key, Set sessionTags) {
+ if (_log.shouldLog(Log.DEBUG)) {
+ //_log.debug("Tags delivered to set " + set + " on session " + sess);
+ if (sessionTags.size() > 0)
+ _log.debug("Tags delivered: " + sessionTags.size() + " for key: " + key.toBase64() + ": " + sessionTags);
+ }
+ OutboundSession sess = getSession(target);
+ if (sess == null) {
+ createSession(target, key);
+ sess = getSession(target);
+ }
+ sess.setCurrentKey(key);
+ TagSet set = new TagSet(sessionTags, key, _context.clock().now());
+ sess.addTags(set);
+ }
+
+ /**
+ * Mark all of the tags delivered to the target up to this point as invalid, since the peer
+ * has failed to respond when they should have. This call essentially lets the system recover
+ * from corrupted tag sets and crashes
+ *
+ */
+ public void failTags(PublicKey target) {
+ removeSession(target);
+ }
+
+ /**
+ * Accept the given tags and associate them with the given key for decryption
+ *
+ */
+ public void tagsReceived(SessionKey key, Set sessionTags) {
+ int overage = 0;
+ TagSet tagSet = new TagSet(sessionTags, key, _context.clock().now());
+ TagSet old = null;
+ SessionTag dupTag = null;
+ for (Iterator iter = sessionTags.iterator(); iter.hasNext();) {
+ SessionTag tag = (SessionTag) iter.next();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Receiving tag " + tag + " for key " + key.toBase64() + " / " + key.toString() + ": tagSet: " + tagSet);
+ synchronized (_inboundTagSets) {
+ old = (TagSet)_inboundTagSets.put(tag, tagSet);
+ overage = _inboundTagSets.size() - MAX_INBOUND_SESSION_TAGS;
+ if (old != null) {
+ if (!old.getAssociatedKey().equals(tagSet.getAssociatedKey())) {
+ _inboundTagSets.remove(tag);
+ dupTag = tag;
+ break;
+ } else {
+ old = null; // ignore the dup
+ }
+ }
+ }
+ }
+
+ if (old != null) {
+ // drop both old and tagSet tags
+ synchronized (_inboundTagSets) {
+ for (Iterator iter = old.getTags().iterator(); iter.hasNext(); ) {
+ SessionTag tag = (SessionTag)iter.next();
+ _inboundTagSets.remove(tag);
+ }
+ for (Iterator iter = sessionTags.iterator(); iter.hasNext(); ) {
+ SessionTag tag = (SessionTag)iter.next();
+ _inboundTagSets.remove(tag);
+ }
+ }
+
+ if (_log.shouldLog(Log.WARN)) {
+ _log.warn("Multiple tags matching! tagSet: " + tagSet + " and old tagSet: " + old + " tag: " + dupTag + "/" + dupTag.toBase64());
+ _log.warn("Earlier tag set creation: " + old + ": key=" + old.getAssociatedKey().toBase64(), old.getCreatedBy());
+ _log.warn("Current tag set creation: " + tagSet + ": key=" + tagSet.getAssociatedKey().toBase64(), tagSet.getCreatedBy());
+ }
+ }
+
+ if (overage > 0)
+ clearExcess(overage);
+
+ if ( (sessionTags.size() <= 0) && (_log.shouldLog(Log.DEBUG)) )
+ _log.debug("Received 0 tags for key " + key);
+ if (false) aggressiveExpire();
+ }
+
+ /**
+ * remove a bunch of arbitrarily selected tags, then drop all of
+ * the associated tag sets. this is very time consuming - iterating
+ * across the entire _inboundTagSets map, but it should be very rare,
+ * and the stats we can gather can hopefully reduce the frequency of
+ * using too many session tags in the future
+ *
+ */
+ private void clearExcess(int overage) {
+ long now = _context.clock().now();
+ int old = 0;
+ int large = 0;
+ int absurd = 0;
+ int recent = 0;
+ int tags = 0;
+ int toRemove = overage * 2;
+ List removed = new ArrayList(toRemove);
+ synchronized (_inboundTagSets) {
+ for (Iterator iter = _inboundTagSets.values().iterator(); iter.hasNext(); ) {
+ TagSet set = (TagSet)iter.next();
+ int size = set.getTags().size();
+ if (size > 1000)
+ absurd++;
+ if (size > 100)
+ large++;
+ if (now - set.getDate() > SESSION_LIFETIME_MAX_MS)
+ old++;
+ else if (now - set.getDate() < 1*60*1000)
+ recent++;
+
+ if ((removed.size() < (toRemove)) || (now - set.getDate() > SESSION_LIFETIME_MAX_MS))
+ removed.add(set);
+ }
+ for (int i = 0; i < removed.size(); i++) {
+ TagSet cur = (TagSet)removed.get(i);
+ for (Iterator iter = cur.getTags().iterator(); iter.hasNext(); ) {
+ SessionTag tag = (SessionTag)iter.next();
+ _inboundTagSets.remove(tag);
+ tags++;
+ }
+ }
+ }
+ if (_log.shouldLog(Log.CRIT))
+ _log.log(Log.CRIT, "TOO MANY SESSION TAGS! removing " + removed
+ + " tag sets arbitrarily, with " + tags + " tags,"
+ + "where there are " + old + " long lasting sessions, "
+ + recent + " ones created in the last minute, and "
+ + large + " sessions with more than 100 tags (and "
+ + absurd + " with more than 1000!), leaving a total of "
+ + _inboundTagSets.size() + " tags behind");
+ }
+
+ /**
+ * Determine if we have received a session key associated with the given session tag,
+ * and if so, discard it (but keep track for frequent dups) and return the decryption
+ * key it was received with (via tagsReceived(...)). returns null if no session key
+ * matches
+ *
+ */
+ public SessionKey consumeTag(SessionTag tag) {
+ if (false) aggressiveExpire();
+ synchronized (_inboundTagSets) {
+ TagSet tagSet = (TagSet) _inboundTagSets.remove(tag);
+ if (tagSet == null) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Cannot consume tag " + tag + " as it is not known");
+ return null;
+ }
+ tagSet.consume(tag);
+
+ SessionKey key = tagSet.getAssociatedKey();
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Consuming tag " + tag.toString() + " for sessionKey " + key.toBase64() + " / " + key.toString() + " on tagSet: " + tagSet);
+ return key;
+ }
+ }
+
+ private OutboundSession getSession(PublicKey target) {
+ synchronized (_outboundSessions) {
+ return (OutboundSession) _outboundSessions.get(target);
+ }
+ }
+
+ private void addSession(OutboundSession sess) {
+ synchronized (_outboundSessions) {
+ _outboundSessions.put(sess.getTarget(), sess);
+ }
+ }
+
+ private void removeSession(PublicKey target) {
+ if (target == null) return;
+ OutboundSession session = null;
+ synchronized (_outboundSessions) {
+ session = (OutboundSession)_outboundSessions.remove(target);
+ }
+ if ( (session != null) && (_log.shouldLog(Log.WARN)) )
+ _log.warn("Removing session tags with " + session.availableTags() + " available for "
+ + (session.getLastExpirationDate()-_context.clock().now())
+ + "ms more", new Exception("Removed by"));
+ }
+
+ /**
+ * Aggressively expire inbound tag sets and outbound sessions
+ *
+ * @return number of tag sets expired
+ */
+ public int aggressiveExpire() {
+ int removed = 0;
+ int remaining = 0;
+ long now = _context.clock().now();
+ StringBuffer buf = null;
+ StringBuffer bufSummary = null;
+ if (_log.shouldLog(Log.DEBUG)) {
+ buf = new StringBuffer(128);
+ buf.append("Expiring inbound: ");
+ bufSummary = new StringBuffer(1024);
+ }
+ synchronized (_inboundTagSets) {
+ for (Iterator iter = _inboundTagSets.keySet().iterator(); iter.hasNext();) {
+ SessionTag tag = (SessionTag) iter.next();
+ TagSet ts = (TagSet) _inboundTagSets.get(tag);
+ long age = now - ts.getDate();
+ if (age > SESSION_LIFETIME_MAX_MS) {
+ //if (ts.getDate() < now - SESSION_LIFETIME_MAX_MS) {
+ iter.remove();
+ removed++;
+ if (buf != null)
+ buf.append(tag.toString()).append(" @ age ").append(DataHelper.formatDuration(age));
+ } else if (false && (bufSummary != null) ) {
+ bufSummary.append("\nTagSet: " + ts.toString() + ", key: " + ts.getAssociatedKey().toBase64()+"/" + ts.getAssociatedKey().toString()
+ + ": tag: " + tag.toString());
+ }
+ }
+ remaining = _inboundTagSets.size();
+ }
+ _context.statManager().addRateData("crypto.sessionTagsRemaining", remaining, 0);
+ if ( (buf != null) && (removed > 0) )
+ _log.debug(buf.toString());
+ if (bufSummary != null)
+ _log.debug("Cleaning up with remaining: " + bufSummary.toString());
+
+ //_log.warn("Expiring tags: [" + tagsToDrop + "]");
+
+ synchronized (_outboundSessions) {
+ for (Iterator iter = _outboundSessions.keySet().iterator(); iter.hasNext();) {
+ PublicKey key = (PublicKey) iter.next();
+ OutboundSession sess = (OutboundSession) _outboundSessions.get(key);
+ removed += sess.expireTags();
+ if (sess.availableTags() <= 0) {
+ iter.remove();
+ removed++;
+ }
+ }
+ }
+ return removed;
+ }
+
+ public String renderStatusHTML() {
+ StringBuffer buf = new StringBuffer(1024);
+        buf.append("<h2>Inbound sessions</h2>");
+        buf.append("<table>");
+ Set inbound = getInboundTagSets();
+ Map inboundSets = new HashMap(inbound.size());
+ for (Iterator iter = inbound.iterator(); iter.hasNext();) {
+ TagSet ts = (TagSet) iter.next();
+ if (!inboundSets.containsKey(ts.getAssociatedKey())) inboundSets.put(ts.getAssociatedKey(), new HashSet());
+ Set sets = (Set) inboundSets.get(ts.getAssociatedKey());
+ sets.add(ts);
+ }
+ for (Iterator iter = inboundSets.keySet().iterator(); iter.hasNext();) {
+ SessionKey skey = (SessionKey) iter.next();
+ Set sets = (Set) inboundSets.get(skey);
+            buf.append("<tr><td>Session key: ").append(skey.toBase64()).append("</td>");
+            buf.append("<td># Sets: ").append(sets.size()).append("</td></tr>");
+            buf.append("<tr><td colspan=\"2\"><ul>");
+            for (Iterator siter = sets.iterator(); siter.hasNext();) {
+                TagSet ts = (TagSet) siter.next();
+                buf.append("<li>Received on: ").append(new Date(ts.getDate()))
+                   .append(" with ").append(ts.getTags().size()).append(" tags remaining</li>");
+            }
+            buf.append("</ul></td></tr>");
+        }
+        buf.append("</table>");
+
+        buf.append("<h2>Outbound sessions</h2>");
+
+        buf.append("<table>");
+        Set outbound = getOutboundSessions();
+        for (Iterator iter = outbound.iterator(); iter.hasNext();) {
+            OutboundSession sess = (OutboundSession) iter.next();
+            buf.append("<tr><td>Target key: ").append(sess.getTarget().toString()).append("<br />");
+            buf.append("Established: ").append(new Date(sess.getEstablishedDate())).append("<br />");
+            buf.append("Last Used: ").append(new Date(sess.getLastUsedDate())).append("<br />");
+            buf.append("# Sets: ").append(sess.getTagSets().size()).append("</td></tr>");
+            buf.append("<tr><td>Session key: ").append(sess.getCurrentKey().toBase64()).append("</td></tr>");
+            buf.append("<tr><td><ul>");
+            for (Iterator siter = sess.getTagSets().iterator(); siter.hasNext();) {
+                TagSet ts = (TagSet) siter.next();
+                buf.append("<li>Sent on: ").append(new Date(ts.getDate()))
+                   .append(" with ").append(ts.getTags().size()).append(" tags remaining</li>");
+            }
+            buf.append("</ul></td></tr>");
+        }
+        buf.append("</table>");
+
+ return buf.toString();
+ }
+
+ class OutboundSession {
+ private PublicKey _target;
+ private SessionKey _currentKey;
+ private long _established;
+ private long _lastUsed;
+ private List _tagSets;
+
+ public OutboundSession(PublicKey target) {
+ this(target, null, _context.clock().now(), _context.clock().now(), new ArrayList());
+ }
+
+ OutboundSession(PublicKey target, SessionKey curKey, long established, long lastUsed, List tagSets) {
+ _target = target;
+ _currentKey = curKey;
+ _established = established;
+ _lastUsed = lastUsed;
+ _tagSets = tagSets;
+ }
+
+ /** list of TagSet objects */
+ List getTagSets() {
+ synchronized (_tagSets) {
+ return new ArrayList(_tagSets);
+ }
+ }
+
+ public PublicKey getTarget() {
+ return _target;
+ }
+
+ public SessionKey getCurrentKey() {
+ return _currentKey;
+ }
+
+ public void setCurrentKey(SessionKey key) {
+ _lastUsed = _context.clock().now();
+ if (_currentKey != null) {
+ if (!_currentKey.equals(key)) {
+ int dropped = 0;
+ List sets = _tagSets;
+ _tagSets = new ArrayList();
+ for (int i = 0; i < sets.size(); i++) {
+ TagSet set = (TagSet) sets.get(i);
+ dropped += set.getTags().size();
+ }
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Rekeyed from " + _currentKey + " to " + key
+ + ": dropping " + dropped + " session tags");
+ }
+ }
+ _currentKey = key;
+
+ }
+
+ public long getEstablishedDate() {
+ return _established;
+ }
+
+ public long getLastUsedDate() {
+ return _lastUsed;
+ }
+
+ /**
+ * Expire old tags, returning the number of tag sets removed
+ */
+ public int expireTags() {
+ long now = _context.clock().now();
+ int removed = 0;
+ synchronized (_tagSets) {
+ for (int i = 0; i < _tagSets.size(); i++) {
+ TagSet set = (TagSet) _tagSets.get(i);
+ if (set.getDate() + SESSION_TAG_DURATION_MS <= now) {
+ _tagSets.remove(i);
+ i--;
+ removed++;
+ }
+ }
+ }
+ return removed;
+ }
+
+ public SessionTag consumeNext() {
+ long now = _context.clock().now();
+ _lastUsed = now;
+ synchronized (_tagSets) {
+ while (_tagSets.size() > 0) {
+ TagSet set = (TagSet) _tagSets.get(0);
+ if (set.getDate() + SESSION_TAG_DURATION_MS > now) {
+ SessionTag tag = set.consumeNext();
+ if (tag != null) return tag;
+ } else {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("TagSet from " + new Date(set.getDate()) + " expired");
+ }
+ _tagSets.remove(0);
+ }
+ }
+ return null;
+ }
+
+ public int availableTags() {
+ int tags = 0;
+ long now = _context.clock().now();
+ synchronized (_tagSets) {
+ for (int i = 0; i < _tagSets.size(); i++) {
+ TagSet set = (TagSet) _tagSets.get(i);
+ if (set.getDate() + SESSION_TAG_DURATION_MS > now)
+ tags += set.getTags().size();
+ }
+ }
+ return tags;
+ }
+
+ /**
+ * Get the furthest away tag set expiration date - after which all of the
+ * tags will have expired
+ *
+ */
+ public long getLastExpirationDate() {
+ long last = 0;
+ synchronized (_tagSets) {
+ for (Iterator iter = _tagSets.iterator(); iter.hasNext();) {
+ TagSet set = (TagSet) iter.next();
+ if ( (set.getDate() > last) && (set.getTags().size() > 0) )
+ last = set.getDate();
+ }
+ }
+ if (last > 0)
+ return last + SESSION_TAG_DURATION_MS;
+ else
+ return -1;
+ }
+
+ public void addTags(TagSet set) {
+ _lastUsed = _context.clock().now();
+ synchronized (_tagSets) {
+ _tagSets.add(set);
+ }
+ }
+ }
+
+ static class TagSet {
+ private Set _sessionTags;
+ private SessionKey _key;
+ private long _date;
+ private Exception _createdBy;
+
+ public TagSet(Set tags, SessionKey key, long date) {
+ if (key == null) throw new IllegalArgumentException("Missing key");
+ if (tags == null) throw new IllegalArgumentException("Missing tags");
+ _sessionTags = tags;
+ _key = key;
+ _date = date;
+ if (true) {
+ long now = I2PAppContext.getGlobalContext().clock().now();
+ _createdBy = new Exception("Created by: key=" + _key.toBase64() + " on "
+ + new Date(now) + "/" + now
+ + " via " + Thread.currentThread().getName());
+ }
+ }
+
+ /** when the tag set was created */
+ public long getDate() {
+ return _date;
+ }
+
+ void setDate(long when) {
+ _date = when;
+ }
+
+ /** tags still available */
+ public Set getTags() {
+ return _sessionTags;
+ }
+
+ public SessionKey getAssociatedKey() {
+ return _key;
+ }
+
+ public boolean contains(SessionTag tag) {
+ return _sessionTags.contains(tag);
+ }
+
+ public void consume(SessionTag tag) {
+ if (contains(tag)) {
+ _sessionTags.remove(tag);
+ }
+ }
+
+ public SessionTag consumeNext() {
+ if (_sessionTags.size() <= 0) {
+ return null;
+ }
+
+ SessionTag first = (SessionTag) _sessionTags.iterator().next();
+ _sessionTags.remove(first);
+ return first;
+ }
+
+ public Exception getCreatedBy() { return _createdBy; }
+
+ public int hashCode() {
+ long rv = 0;
+ if (_key != null) rv = rv * 7 + _key.hashCode();
+ rv = rv * 7 + _date;
+ // no need to hashCode the tags, key + date should be enough
+ return (int) rv;
+ }
+
+ public boolean equals(Object o) {
+ if ((o == null) || !(o instanceof TagSet)) return false;
+ TagSet ts = (TagSet) o;
+ return DataHelper.eq(ts.getAssociatedKey(), getAssociatedKey())
+ //&& DataHelper.eq(ts.getTags(), getTags())
+ && ts.getDate() == getDate();
+ }
+ }
+}
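The inbound half mirrors this: tags that arrive alongside an ElGamal-decrypted message are stored with tagsReceived(), and a later message that leads with a known tag gets its key back from consumeTag(), skipping ElGamal entirely. A small sketch against the base SessionKeyManager API (the class and method names below are placeholders; the transient implementation above is package-private and would normally be reached through the application context):

import java.util.Set;
import net.i2p.crypto.SessionKeyManager;
import net.i2p.data.SessionKey;
import net.i2p.data.SessionTag;

class InboundFlowSketch {
    /** Record the tags delivered inside a successfully ElGamal-decrypted message. */
    static void onElGamalDecrypt(SessionKeyManager mgr, SessionKey key, Set deliveredTags) {
        mgr.tagsReceived(key, deliveredTags); // later messages may lead with one of these tags
    }

    /** Try the cheap path first when a message starts with a possible session tag. */
    static SessionKey onMessage(SessionKeyManager mgr, SessionTag leadingTag) {
        SessionKey key = mgr.consumeTag(leadingTag); // null if the tag is unknown or expired
        // Non-null: AES-decrypt with 'key'; null: fall back to full ElGamal decryption.
        return key;
    }
}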
diff --git a/src/net/i2p/crypto/TrustedUpdate.java b/src/net/i2p/crypto/TrustedUpdate.java
new file mode 100644
index 0000000..fa2ebc6
--- /dev/null
+++ b/src/net/i2p/crypto/TrustedUpdate.java
@@ -0,0 +1,639 @@
+package net.i2p.crypto;
+
+import java.io.ByteArrayInputStream;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.SequenceInputStream;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.StringTokenizer;
+
+import net.i2p.CoreVersion;
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataFormatException;
+import net.i2p.data.DataHelper;
+import net.i2p.data.Signature;
+import net.i2p.data.SigningPrivateKey;
+import net.i2p.data.SigningPublicKey;
+import net.i2p.util.Log;
+
+/**
+ * Handles signing and verification of update files. For convenience, the
+ * following operations are also exposed on the command line:
+ *
+ * <pre>
+ * java net.i2p.crypto.TrustedUpdate keygen publicKeyFile privateKeyFile
+ * java net.i2p.crypto.TrustedUpdate showversion signedFile
+ * java net.i2p.crypto.TrustedUpdate sign inputFile signedFile privateKeyFile version
+ * java net.i2p.crypto.TrustedUpdate verifysig signedFile
+ * java net.i2p.crypto.TrustedUpdate verifyupdate signedFile
+ * </pre>
+ *
+ * @author jrandom and smeghead
+ */
+public class TrustedUpdate {
+
+    /**
+     * Constructs a new <code>TrustedUpdate</code> with the default global
+     * context.
+     */
+ public TrustedUpdate() {
+ this(I2PAppContext.getGlobalContext());
+ }
+
+    /**
+     * Constructs a new <code>TrustedUpdate</code> with the given
+     * {@link net.i2p.I2PAppContext}.
+     *
+     * @param context An instance of <code>I2PAppContext</code>.
+ */
+ public TrustedUpdate(I2PAppContext context) {
+ _context = context;
+ _log = _context.logManager().getLog(TrustedUpdate.class);
+ _trustedKeys = new ArrayList();
+
+ String propertyTrustedKeys = context.getProperty(PROP_TRUSTED_KEYS);
+
+ if ( (propertyTrustedKeys != null) && (propertyTrustedKeys.length() > 0) ) {
+ StringTokenizer propertyTrustedKeysTokens = new StringTokenizer(propertyTrustedKeys, ",");
+
+ while (propertyTrustedKeysTokens.hasMoreTokens())
+ _trustedKeys.add(propertyTrustedKeysTokens.nextToken().trim());
+
+ } else {
+ _trustedKeys.add(DEFAULT_TRUSTED_KEY);
+ }
+ }
+
+ /**
+ * Parses command line arguments when this class is used from the command
+ * line.
+ *
+ * @param args Command line parameters.
+ */
+ public static void main(String[] args) {
+ try {
+ if ("keygen".equals(args[0])) {
+ genKeysCLI(args[1], args[2]);
+ } else if ("showversion".equals(args[0])) {
+ showVersionCLI(args[1]);
+ } else if ("sign".equals(args[0])) {
+ signCLI(args[1], args[2], args[3], args[4]);
+ } else if ("verifysig".equals(args[0])) {
+ verifySigCLI(args[1]);
+ } else if ("verifyupdate".equals(args[0])) {
+ verifyUpdateCLI(args[1]);
+ } else {
+ showUsageCLI();
+ }
+ } catch (ArrayIndexOutOfBoundsException aioobe) {
+ showUsageCLI();
+ }
+ }
+
+ /**
+ * Checks if the given version is newer than the given current version.
+ *
+ * @param currentVersion The current version.
+ * @param newVersion The version to test.
+ *
+     * @return <code>true</code> if the given version is newer than the current
+     *         version, otherwise <code>false</code>.
+ */
+ public static final boolean needsUpdate(String currentVersion, String newVersion) {
+ StringTokenizer newVersionTokens = new StringTokenizer(sanitize(newVersion), ".");
+ StringTokenizer currentVersionTokens = new StringTokenizer(sanitize(currentVersion), ".");
+
+ while (newVersionTokens.hasMoreTokens() && currentVersionTokens.hasMoreTokens()) {
+ String newNumber = newVersionTokens.nextToken();
+ String currentNumber = currentVersionTokens.nextToken();
+
+ switch (compare(newNumber, currentNumber)) {
+ case -1: // newNumber is smaller
+ return false;
+ case 0: // eq
+ break;
+ case 1: // newNumber is larger
+ return true;
+ }
+ }
+
+ if (newVersionTokens.hasMoreTokens() && !currentVersionTokens.hasMoreTokens())
+ return true;
+
+ return false;
+ }
+
+ private static final int compare(String lop, String rop) {
+ try {
+ int left = Integer.parseInt(lop);
+ int right = Integer.parseInt(rop);
+
+ if (left < right)
+ return -1;
+ else if (left == right)
+ return 0;
+ else
+ return 1;
+ } catch (NumberFormatException nfe) {
+ return 0;
+ }
+ }
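To make the comparison semantics of needsUpdate()/compare() concrete: versions are compared field by field on "." boundaries, and when the shared fields are equal, a candidate with extra trailing fields counts as newer. A few illustrative calls (the VersionCheckExample wrapper is hypothetical):

import net.i2p.crypto.TrustedUpdate;

public class VersionCheckExample {
    public static void main(String[] args) {
        // needsUpdate(current, candidate): true when the candidate is strictly newer.
        System.out.println(TrustedUpdate.needsUpdate("0.6.1.9",  "0.6.1.10")); // true  (10 > 9)
        System.out.println(TrustedUpdate.needsUpdate("0.6.1.10", "0.6.1.9"));  // false
        System.out.println(TrustedUpdate.needsUpdate("0.6.1",    "0.6.1.1"));  // true  (extra trailing field)
        System.out.println(TrustedUpdate.needsUpdate("0.6.1",    "0.6.1"));    // false (equal)
    }
}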
+
+ private static final void genKeysCLI(String publicKeyFile, String privateKeyFile) {
+ FileOutputStream fileOutputStream = null;
+
+ try {
+ Object signingKeypair[] = _context.keyGenerator().generateSigningKeypair();
+ SigningPublicKey signingPublicKey = (SigningPublicKey) signingKeypair[0];
+ SigningPrivateKey signingPrivateKey = (SigningPrivateKey) signingKeypair[1];
+
+ fileOutputStream = new FileOutputStream(publicKeyFile);
+ signingPublicKey.writeBytes(fileOutputStream);
+ fileOutputStream.close();
+ fileOutputStream = null;
+
+ fileOutputStream = new FileOutputStream(privateKeyFile);
+ signingPrivateKey.writeBytes(fileOutputStream);
+
+ System.out.println("\r\nPrivate key written to: " + privateKeyFile);
+ System.out.println("Public key written to: " + publicKeyFile);
+ System.out.println("\r\nPublic key: " + signingPublicKey.toBase64() + "\r\n");
+ } catch (Exception e) {
+ System.err.println("Error writing keys:");
+ e.printStackTrace();
+ } finally {
+ if (fileOutputStream != null)
+ try {
+ fileOutputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+ }
+
+ private static final String sanitize(String versionString) {
+ StringBuffer versionStringBuffer = new StringBuffer(versionString);
+
+ for (int i = 0; i < versionStringBuffer.length(); i++) {
+ if (VALID_VERSION_CHARS.indexOf(versionStringBuffer.charAt(i)) == -1) {
+ versionStringBuffer.deleteCharAt(i);
+ i--;
+ }
+ }
+
+ return versionStringBuffer.toString();
+ }
+
+ private static final void showUsageCLI() {
+ System.err.println("Usage: TrustedUpdate keygen publicKeyFile privateKeyFile");
+ System.err.println(" TrustedUpdate showversion signedFile");
+ System.err.println(" TrustedUpdate sign inputFile signedFile privateKeyFile version");
+ System.err.println(" TrustedUpdate verifysig signedFile");
+ System.err.println(" TrustedUpdate verifyupdate signedFile");
+ }
+
+ private static final void showVersionCLI(String signedFile) {
+ String versionString = new TrustedUpdate().getVersionString(signedFile);
+
+ if ("".equals(versionString))
+ System.out.println("No version string found in file '" + signedFile + "'");
+ else
+ System.out.println("Version: " + versionString);
+ }
+
+ private static final void signCLI(String inputFile, String signedFile, String privateKeyFile, String version) {
+ Signature signature = new TrustedUpdate().sign(inputFile, signedFile, privateKeyFile, version);
+
+ if (signature != null)
+ System.out.println("Input file '" + inputFile + "' signed and written to '" + signedFile + "'");
+ else
+ System.out.println("Error signing input file '" + inputFile + "'");
+ }
+
+ private static final void verifySigCLI(String signedFile) {
+ boolean isValidSignature = new TrustedUpdate().verify(signedFile);
+
+ if (isValidSignature)
+ System.out.println("Signature VALID");
+ else
+ System.out.println("Signature INVALID");
+ }
+
+ private static final void verifyUpdateCLI(String signedFile) {
+ boolean isUpdate = new TrustedUpdate().isUpdatedVersion(CoreVersion.VERSION, signedFile);
+
+ if (isUpdate)
+ System.out.println("File version is newer than current version.");
+ else
+ System.out.println("File version is older than or equal to current version.");
+ }
+
+ /**
+ * Fetches the trusted keys for the current instance.
+ *
+ * @return An <code>ArrayList</code> containing the trusted keys.
+ */
+ public ArrayList getTrustedKeys() {
+ return _trustedKeys;
+ }
+
+ /**
+ * Reads the version string from a signed update file.
+ *
+ * @param signedFile A signed update file.
+ *
+ * @return The version string read, or an empty string if no version string
+ * is present.
+ */
+ public String getVersionString(String signedFile) {
+ FileInputStream fileInputStream = null;
+
+ try {
+ fileInputStream = new FileInputStream(signedFile);
+ long skipped = fileInputStream.skip(Signature.SIGNATURE_BYTES);
+ if (skipped != Signature.SIGNATURE_BYTES)
+ return "";
+ byte[] data = new byte[VERSION_BYTES];
+ int bytesRead = DataHelper.read(fileInputStream, data);
+
+ if (bytesRead != VERSION_BYTES) {
+ return "";
+ }
+
+ for (int i = 0; i < VERSION_BYTES; i++)
+ if (data[i] == 0x00) {
+ return new String(data, 0, i, "UTF-8");
+ }
+
+ return new String(data, "UTF-8");
+ } catch (UnsupportedEncodingException uee) {
+ throw new RuntimeException("wtf, your JVM doesnt support utf-8? " + uee.getMessage());
+ } catch (IOException ioe) {
+ return "";
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+ }
+
+ /**
+ * Verifies that the version of the given signed update file is newer than
+ * <code>currentVersion</code>.
+ *
+ * @param currentVersion The current version to check against.
+ * @param signedFile The signed update file.
+ *
+ * @return <code>true</code> if the signed update file's version is newer
+ *         than the current version, otherwise <code>false</code>.
+ */
+ public boolean isUpdatedVersion(String currentVersion, String signedFile) {
+ if (needsUpdate(currentVersion, getVersionString(signedFile)))
+ return true;
+ else
+ return false;
+ }
+
+ /**
+ * Verifies the signature of a signed update file, and if it's valid and the
+ * file's version is newer than the given current version, migrates the data
+ * out of <code>signedFile</code> and into <code>outputFile</code>.
+ *
+ * @param currentVersion The current version to check against.
+ * @param signedFile A signed update file.
+ * @param outputFile The file to write the verified data to.
+ *
+ * @return <code>true</code> if the signature and version were valid and the
+ *         data was moved, <code>false</code> otherwise.
+ */
+ public boolean migrateVerified(String currentVersion, String signedFile, String outputFile) {
+ if (!isUpdatedVersion(currentVersion, signedFile))
+ return false;
+
+ if (!verify(signedFile))
+ return false;
+
+ FileInputStream fileInputStream = null;
+ FileOutputStream fileOutputStream = null;
+
+ try {
+ fileInputStream = new FileInputStream(signedFile);
+ fileOutputStream = new FileOutputStream(outputFile);
+ long skipped = 0;
+
+ while (skipped < HEADER_BYTES)
+ skipped += fileInputStream.skip(HEADER_BYTES - skipped);
+
+ byte[] buffer = new byte[1024];
+ int bytesRead = 0;
+
+ while ( (bytesRead = fileInputStream.read(buffer)) != -1)
+ fileOutputStream.write(buffer, 0, bytesRead);
+ } catch (IOException ioe) {
+ return false;
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+
+ if (fileOutputStream != null)
+ try {
+ fileOutputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+
+ return true;
+ }
+
+ /**
+ * Uses the given private key to sign the given input file along with its
+ * version string using DSA. The output will be a signed update file where
+ * the first 40 bytes are the resulting DSA signature, the next 16 bytes are
+ * the input file's version string encoded in UTF-8 (padded with trailing
+ * <code>0h</code> characters if necessary), and the remaining bytes are the
+ * raw bytes of the input file.
+ *
+ * @param inputFile The file to be signed.
+ * @param signedFile The signed update file to write.
+ * @param privateKeyFile The name of the file containing the private key to
+ * sign <code>inputFile</code> with.
+ * @param version The version string of the input file. If this is
+ * longer than 16 characters it will be truncated.
+ *
+ * @return An instance of {@link net.i2p.data.Signature}, or
+ * <code>null</code> if there was an error.
+ */
+ public Signature sign(String inputFile, String signedFile, String privateKeyFile, String version) {
+ FileInputStream fileInputStream = null;
+ SigningPrivateKey signingPrivateKey = new SigningPrivateKey();
+
+ try {
+ fileInputStream = new FileInputStream(privateKeyFile);
+ signingPrivateKey.readBytes(fileInputStream);
+ } catch (IOException ioe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Unable to load the signing key", ioe);
+
+ return null;
+ } catch (DataFormatException dfe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Unable to load the signing key", dfe);
+
+ return null;
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+
+ return sign(inputFile, signedFile, signingPrivateKey, version);
+ }
+
+ /**
+ * Uses the given {@link net.i2p.data.SigningPrivateKey} to sign the given
+ * input file along with its version string using DSA. The output will be a
+ * signed update file where the first 40 bytes are the resulting DSA
+ * signature, the next 16 bytes are the input file's version string encoded
+ * in UTF-8 (padded with trailing <code>0h</code> characters if necessary),
+ * and the remaining bytes are the raw bytes of the input file.
+ *
+ * @param inputFile The file to be signed.
+ * @param signedFile The signed update file to write.
+ * @param signingPrivateKey An instance of <code>SigningPrivateKey</code>
+ *        to sign <code>inputFile</code> with.
+ * @param version The version string of the input file. If this is
+ * longer than 16 characters it will be truncated.
+ *
+ * @return An instance of {@link net.i2p.data.Signature}, or
+ * <code>null</code> if there was an error.
+ */
+ public Signature sign(String inputFile, String signedFile, SigningPrivateKey signingPrivateKey, String version) {
+ byte[] versionHeader = {
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00 };
+ byte[] versionRawBytes = null;
+
+ if (version.length() > VERSION_BYTES)
+ version = version.substring(0, VERSION_BYTES);
+
+ try {
+ versionRawBytes = version.getBytes("UTF-8");
+ } catch (UnsupportedEncodingException e) {
+ throw new RuntimeException("wtf, your JVM doesnt support utf-8? " + e.getMessage());
+ }
+
+ System.arraycopy(versionRawBytes, 0, versionHeader, 0, versionRawBytes.length);
+
+ FileInputStream fileInputStream = null;
+ Signature signature = null;
+ SequenceInputStream bytesToSignInputStream = null;
+ ByteArrayInputStream versionHeaderInputStream = null;
+
+ try {
+ fileInputStream = new FileInputStream(inputFile);
+ versionHeaderInputStream = new ByteArrayInputStream(versionHeader);
+ bytesToSignInputStream = new SequenceInputStream(versionHeaderInputStream, fileInputStream);
+ signature = _context.dsa().sign(bytesToSignInputStream, signingPrivateKey);
+
+ } catch (Exception e) {
+ if (_log.shouldLog(Log.ERROR))
+ _log.error("Error signing", e);
+
+ return null;
+ } finally {
+ if (bytesToSignInputStream != null)
+ try {
+ bytesToSignInputStream.close();
+ } catch (IOException ioe) {
+ }
+
+ fileInputStream = null;
+ }
+
+ FileOutputStream fileOutputStream = null;
+
+ try {
+ fileOutputStream = new FileOutputStream(signedFile);
+ fileOutputStream.write(signature.getData());
+ fileOutputStream.write(versionHeader);
+ fileInputStream = new FileInputStream(inputFile);
+ byte[] buffer = new byte[1024];
+ int bytesRead = 0;
+ while ( (bytesRead = fileInputStream.read(buffer)) != -1)
+ fileOutputStream.write(buffer, 0, bytesRead);
+ fileOutputStream.close();
+ } catch (IOException ioe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.log(Log.WARN, "Error writing signed file " + signedFile, ioe);
+
+ return null;
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+
+ if (fileOutputStream != null)
+ try {
+ fileOutputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+
+ return signature;
+ }
+
+ /**
+ * Verifies the DSA signature of a signed update file.
+ *
+ * @param signedFile The signed update file to check.
+ *
+ * @return <code>true</code> if the file has a valid signature, otherwise
+ *         <code>false</code>.
+ */
+ public boolean verify(String signedFile) {
+ for (int i = 0; i < _trustedKeys.size(); i++) {
+ SigningPublicKey signingPublicKey = new SigningPublicKey();
+
+ try {
+ signingPublicKey.fromBase64((String)_trustedKeys.get(i));
+ boolean isValidSignature = verify(signedFile, signingPublicKey);
+
+ if (isValidSignature)
+ return true;
+ } catch (DataFormatException dfe) {
+ _log.log(Log.CRIT, "Trusted key " + i + " is not valid");
+ }
+ }
+
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("None of the keys match");
+
+ return false;
+ }
+
+ /**
+ * Verifies the DSA signature of a signed update file.
+ *
+ * @param signedFile The signed update file to check.
+ * @param publicKeyFile A file containing the public key to use for
+ * verification.
+ *
+ * @return <code>true</code> if the file has a valid signature, otherwise
+ *         <code>false</code>.
+ */
+ public boolean verify(String signedFile, String publicKeyFile) {
+ SigningPublicKey signingPublicKey = new SigningPublicKey();
+ FileInputStream fileInputStream = null;
+
+ try {
+ fileInputStream = new FileInputStream(publicKeyFile);
+ signingPublicKey.readBytes(fileInputStream);
+ } catch (IOException ioe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Unable to load the public key", ioe);
+
+ return false;
+ } catch (DataFormatException dfe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Unable to load the public key", dfe);
+
+ return false;
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+
+ return verify(signedFile, signingPublicKey);
+ }
+
+ /**
+ * Verifies the DSA signature of a signed update file.
+ *
+ * @param signedFile The signed update file to check.
+ * @param signingPublicKey An instance of
+ * {@link net.i2p.data.SigningPublicKey} to use for
+ * verification.
+ *
+ * @return <code>true</code> if the file has a valid signature, otherwise
+ *         <code>false</code>.
+ */
+ public boolean verify(String signedFile, SigningPublicKey signingPublicKey) {
+ FileInputStream fileInputStream = null;
+
+ try {
+ fileInputStream = new FileInputStream(signedFile);
+ Signature signature = new Signature();
+
+ signature.readBytes(fileInputStream);
+
+ return _context.dsa().verifySignature(signature, fileInputStream, signingPublicKey);
+ } catch (IOException ioe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Error reading " + signedFile + " to verify", ioe);
+
+ return false;
+ } catch (DataFormatException dfe) {
+ if (_log.shouldLog(Log.ERROR))
+ _log.error("Error reading the signature", dfe);
+
+ return false;
+ } finally {
+ if (fileInputStream != null)
+ try {
+ fileInputStream.close();
+ } catch (IOException ioe) {
+ }
+ }
+ }
+}
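The class above defines the signed update format used throughout: a 40-byte DSA signature, a 16-byte UTF-8 version field padded with zero bytes, and then the raw payload. As a rough illustration of how the pieces fit together (not part of the patch; the file names and version strings are made up, and a core classpath providing I2PAppContext and the net.i2p.data types is assumed), a publisher and a receiver might drive it like this:

    import net.i2p.crypto.TrustedUpdate;
    import net.i2p.data.Signature;

    public class TrustedUpdateSketch {
        public static void main(String[] args) {
            String payload = "syndie-update.zip";  // file to distribute (assumed to exist)
            String signed  = "syndie-update.sud";  // 40-byte sig + 16-byte version + payload
            String privKey = "priv.key";           // written earlier by the keygen CLI command

            TrustedUpdate tu = new TrustedUpdate(); // uses the global I2PAppContext

            // Publisher side: produce the signed file.
            Signature sig = tu.sign(payload, signed, privKey, "1.0.1");
            if (sig == null)
                return; // signing failed, details are in the log

            // Receiver side: accept the file only if a trusted key signed it and its
            // embedded version is newer than what is installed, then strip the header off.
            if (tu.verify(signed))
                tu.migrateVerified("1.0.0", signed, "update-verified.zip");
        }
    }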
diff --git a/src/net/i2p/crypto/YKGenerator.java b/src/net/i2p/crypto/YKGenerator.java
new file mode 100644
index 0000000..e10b917
--- /dev/null
+++ b/src/net/i2p/crypto/YKGenerator.java
@@ -0,0 +1,219 @@
+package net.i2p.crypto;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.List;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.Clock;
+import net.i2p.util.I2PThread;
+import net.i2p.util.Log;
+import net.i2p.util.NativeBigInteger;
+import net.i2p.util.RandomSource;
+
+/**
+ * Precalculate the Y and K for ElGamal encryption operations.
+ *
+ * This class precalcs a set of values on its own thread, using those transparently
+ * when a new instance is created. By default, the minimum threshold for creating
+ * new values for the pool is 5, and the max pool size is 10. Whenever the pool has
+ * less than the minimum, it fills it up again to the max. There is a delay after
+ * each precalculation so that the CPU isn't hosed during startup (defaulting to 10 seconds).
+ * These three parameters are controlled by JVM system properties and
+ * can be adjusted via:
+ * -Dcrypto.yk.precalc.min=40 -Dcrypto.yk.precalc.max=100 -Dcrypto.yk.precalc.delay=60000
+ *
+ * (delay is milliseconds)
+ *
+ * To disable precalculation, set min to 0
+ *
+ * @author jrandom
+ */
+class YKGenerator {
+ private final static Log _log = new Log(YKGenerator.class);
+ private static int MIN_NUM_BUILDERS = -1;
+ private static int MAX_NUM_BUILDERS = -1;
+ private static int CALC_DELAY = -1;
+ private static volatile List _values = new ArrayList(50); // list of BigInteger[] values (y and k)
+ private static Thread _precalcThread = null;
+
+ public final static String PROP_YK_PRECALC_MIN = "crypto.yk.precalc.min";
+ public final static String PROP_YK_PRECALC_MAX = "crypto.yk.precalc.max";
+ public final static String PROP_YK_PRECALC_DELAY = "crypto.yk.precalc.delay";
+ public final static String DEFAULT_YK_PRECALC_MIN = "10";
+ public final static String DEFAULT_YK_PRECALC_MAX = "30";
+ public final static String DEFAULT_YK_PRECALC_DELAY = "10000";
+
+ /** check every 30 seconds whether we have less than the minimum */
+ private final static long CHECK_DELAY = 30 * 1000;
+
+ static {
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_YK_PRECALC_MIN, DEFAULT_YK_PRECALC_MIN));
+ MIN_NUM_BUILDERS = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_YK_PRECALC_MIN);
+ MIN_NUM_BUILDERS = val;
+ }
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_YK_PRECALC_MAX, DEFAULT_YK_PRECALC_MAX));
+ MAX_NUM_BUILDERS = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_YK_PRECALC_MAX);
+ MAX_NUM_BUILDERS = val;
+ }
+ try {
+ int val = Integer.parseInt(ctx.getProperty(PROP_YK_PRECALC_DELAY, DEFAULT_YK_PRECALC_DELAY));
+ CALC_DELAY = val;
+ } catch (Throwable t) {
+ int val = Integer.parseInt(DEFAULT_YK_PRECALC_DELAY);
+ CALC_DELAY = val;
+ }
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("ElGamal YK Precalc (minimum: " + MIN_NUM_BUILDERS + " max: " + MAX_NUM_BUILDERS + ", delay: "
+ + CALC_DELAY + ")");
+
+ _precalcThread = new I2PThread(new YKPrecalcRunner(MIN_NUM_BUILDERS, MAX_NUM_BUILDERS));
+ _precalcThread.setName("YK Precalc");
+ _precalcThread.setDaemon(true);
+ _precalcThread.setPriority(Thread.MIN_PRIORITY);
+ _precalcThread.start();
+ }
+
+ private static final int getSize() {
+ synchronized (_values) {
+ return _values.size();
+ }
+ }
+
+ private static final int addValues(BigInteger yk[]) {
+ int sz = 0;
+ synchronized (_values) {
+ _values.add(yk);
+ sz = _values.size();
+ }
+ return sz;
+ }
+
+ public static BigInteger[] getNextYK() {
+ synchronized (_values) {
+ if (_values.size() > 0) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Sufficient precalculated YK values - fetch the existing");
+ return (BigInteger[]) _values.remove(0);
+ }
+ }
+ if (_log.shouldLog(Log.INFO)) _log.info("Insufficient precalculated YK values - create a new one");
+ return generateYK();
+ }
+
+ private final static BigInteger _two = new NativeBigInteger(1, new byte[] { 0x02});
+
+ private static final BigInteger[] generateYK() {
+ NativeBigInteger k = null;
+ BigInteger y = null;
+ long t0 = 0;
+ long t1 = 0;
+ while (k == null) {
+ t0 = Clock.getInstance().now();
+ k = new NativeBigInteger(KeyGenerator.PUBKEY_EXPONENT_SIZE, RandomSource.getInstance());
+ t1 = Clock.getInstance().now();
+ if (BigInteger.ZERO.compareTo(k) == 0) {
+ k = null;
+ continue;
+ }
+ BigInteger kPlus2 = k.add(_two);
+ if (kPlus2.compareTo(CryptoConstants.elgp) > 0) k = null;
+ }
+ long t2 = Clock.getInstance().now();
+ y = CryptoConstants.elgg.modPow(k, CryptoConstants.elgp);
+
+ BigInteger yk[] = new BigInteger[2];
+ yk[0] = y;
+ yk[1] = k;
+
+ long diff = t2 - t0;
+ if (diff > 1000) {
+ if (_log.shouldLog(Log.WARN)) _log.warn("Took too long to generate YK value for ElGamal (" + diff + "ms)");
+ }
+
+ return yk;
+ }
+
+ public static void main(String args[]) {
+ RandomSource.getInstance().nextBoolean(); // warm it up
+ try {
+ Thread.sleep(20 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+ _log.debug("\n\n\n\nBegin test\n");
+ long negTime = 0;
+ for (int i = 0; i < 5; i++) {
+ long startNeg = Clock.getInstance().now();
+ getNextYK();
+ long endNeg = Clock.getInstance().now();
+ negTime += endNeg - startNeg;
+ }
+ _log.debug("YK fetch time for 5 runs: " + negTime + " @ " + negTime / 5L + "ms each");
+ try {
+ Thread.sleep(30 * 1000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+
+ private static class YKPrecalcRunner implements Runnable {
+ private int _minSize;
+ private int _maxSize;
+
+ private YKPrecalcRunner(int minSize, int maxSize) {
+ _minSize = minSize;
+ _maxSize = maxSize;
+ }
+
+ public void run() {
+ while (true) {
+ int curSize = 0;
+ long start = Clock.getInstance().now();
+ int startSize = getSize();
+ curSize = startSize;
+ while (curSize < _minSize) {
+ while (curSize < _maxSize) {
+ long begin = Clock.getInstance().now();
+ curSize = addValues(generateYK());
+ long end = Clock.getInstance().now();
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Precalculated YK value in " + (end - begin) + "ms");
+ // for some relief...
+ try {
+ Thread.sleep(CALC_DELAY);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+ }
+ long end = Clock.getInstance().now();
+ int numCalc = curSize - startSize;
+ if (numCalc > 0) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Precalced " + numCalc + " to " + curSize + " in "
+ + (end - start - CALC_DELAY * numCalc) + "ms (not counting "
+ + (CALC_DELAY * numCalc) + "ms relief). now sleeping");
+ }
+ try {
+ Thread.sleep(CHECK_DELAY);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+ }
+ }
+}
\ No newline at end of file
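Since YKGenerator is package-private, its pool is only reachable from other classes in net.i2p.crypto (presumably the ElGamal encryption code in the full source). A minimal sketch, assuming nothing beyond what the file itself shows, of how a caller in that package consumes the pool and how the properties named above are set:

    // Tuning happens on the JVM command line, e.g.:
    //   java -Dcrypto.yk.precalc.min=10 -Dcrypto.yk.precalc.max=30 \
    //        -Dcrypto.yk.precalc.delay=10000 ...
    package net.i2p.crypto;

    import java.math.BigInteger;

    class YKGeneratorSketch {
        static BigInteger[] fetchPair() {
            // Hands back a precalculated pair when the pool is non-empty,
            // otherwise generates one on the calling thread.
            BigInteger[] yk = YKGenerator.getNextYK();
            BigInteger y = yk[0];  // g^k mod p, the ephemeral public part
            BigInteger k = yk[1];  // the matching random exponent
            return new BigInteger[] { y, k };
        }
    }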
diff --git a/src/net/i2p/data/Address.java b/src/net/i2p/data/Address.java
new file mode 100644
index 0000000..9d0358a
--- /dev/null
+++ b/src/net/i2p/data/Address.java
@@ -0,0 +1,81 @@
+package net.i2p.data;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+public class Address extends DataStructureImpl {
+ private final static Log _log = new Log(Address.class);
+ private String _hostname;
+ private Destination _destination;
+
+ public Address() {
+ _hostname = null;
+ _destination = null;
+ }
+
+ public String getHostname() {
+ return _hostname;
+ }
+
+ public void setHostname(String hostname) {
+ _hostname = hostname;
+ }
+
+ public Destination getDestination() {
+ return _destination;
+ }
+
+ public void setDestination(Destination destination) {
+ _destination = destination;
+ }
+
+ public void setDestination(String base64) {
+ try {
+ Destination result = new Destination();
+ result.fromBase64(base64);
+ _destination = result;
+ } catch (DataFormatException dfe) {
+ _destination = null;
+ }
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException,
+ IOException {
+ _hostname = DataHelper.readString(in);
+ _destination = new Destination();
+ _destination.readBytes(in);
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException,
+ IOException {
+ if ((_hostname == null) || (_destination == null))
+ throw new DataFormatException("Not enough data to write address");
+ DataHelper.writeString(out, _hostname);
+ _destination.writeBytes(out);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof Address)) return false;
+ Address addr = (Address) obj;
+ return DataHelper.eq(_hostname, addr.getHostname())
+ && DataHelper.eq(_destination, addr.getDestination());
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(getHostname())
+ + DataHelper.hashCode(getDestination());
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[Address: ");
+ buf.append("\n\tHostname: ").append(getHostname());
+ buf.append("\n\tDestination: ").append(getDestination());
+ buf.append("]");
+ return buf.toString();
+ }
+
+}
diff --git a/src/net/i2p/data/Base64.java b/src/net/i2p/data/Base64.java
new file mode 100644
index 0000000..a74f53a
--- /dev/null
+++ b/src/net/i2p/data/Base64.java
@@ -0,0 +1,694 @@
+package net.i2p.data;
+
+import java.io.ByteArrayOutputStream;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Encodes and decodes to and from Base64 notation.
+ */
+public class Base64 {
+ private final static Log _log = new Log(Base64.class);
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * Equivalent to calling
+ * <code>encodeBytes( source, 0, source.length )</code>
+ *
+ * @param source The data to convert
+ * @since 1.4
+ */
+ private static String encodeBytes(byte[] source) {
+ return encodeBytes(source, false); // don't add newlines
+ } // end encodeBytes
+
+ /**
+ * Same as encodeBytes, except uses a filesystem / URL friendly set of characters,
+ * replacing / with ~, and + with -
+ */
+ private static String safeEncode(byte[] source, int off, int len, boolean useStandardAlphabet) {
+ if (len + off > source.length)
+ throw new ArrayIndexOutOfBoundsException("Trying to encode too much! source.len=" + source.length + " off=" + off + " len=" + len);
+ StringBuffer buf = new StringBuffer(len * 4 / 3);
+ if (useStandardAlphabet)
+ encodeBytes(source, off, len, false, buf, ALPHABET);
+ else
+ encodeBytes(source, off, len, false, buf, ALPHABET_ALT);
+ return buf.toString();
+ }
+
+ /**
+ * Same as decode, except from a filesystem / URL friendly set of characters,
+ * replacing / with ~, and + with -
+ */
+ private static byte[] safeDecode(String source, boolean useStandardAlphabet) {
+ if (source == null) return null;
+ String toDecode = null;
+ if (useStandardAlphabet) {
+ toDecode = source;
+ } else {
+ toDecode = source.replace('~', '/');
+ toDecode = toDecode.replace('-', '+');
+ }
+ return standardDecode(toDecode);
+ }
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * Equivalent to calling
+ * <code>encodeBytes( source, 0, source.length )</code>
+ *
+ * @param source The data to convert
+ * @param breakLines Break lines at 80 characters or less.
+ * @since 1.4
+ */
+ private static String encodeBytes(byte[] source, boolean breakLines) {
+ return encodeBytes(source, 0, source.length, breakLines);
+ } // end encodeBytes
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @since 1.4
+ */
+ private static String encodeBytes(byte[] source, int off, int len) {
+ return encodeBytes(source, off, len, true);
+ } // end encodeBytes
+
+ private static String encodeBytes(byte[] source, int off, int len, boolean breakLines) {
+ StringBuffer buf = new StringBuffer( (len*4)/3 );
+ encodeBytes(source, off, len, breakLines, buf, ALPHABET);
+ return buf.toString();
+ }
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @param breakLines Break lines at 80 characters or less.
+ * @since 1.4
+ */
+ private static void encodeBytes(byte[] source, int off, int len, boolean breakLines, StringBuffer out, byte alpha[]) {
+ int len43 = len * 4 / 3;
+ //byte[] outBuff = new byte[(len43) // Main 4:3
+ // + ((len % 3) > 0 ? 4 : 0) // Account for padding
+ // + (breakLines ? (len43 / MAX_LINE_LENGTH) : 0)]; // New lines
+ int d = 0;
+ int e = 0;
+ int len2 = len - 2;
+ int lineLength = 0;
+ for (; d < len2; d += 3, e += 4) {
+ //encode3to4(source, d + off, 3, outBuff, e);
+ encode3to4(source, d + off, 3, out, alpha);
+
+ lineLength += 4;
+ if (breakLines && lineLength == MAX_LINE_LENGTH) {
+ //outBuff[e + 4] = NEW_LINE;
+ out.append('\n');
+ e++;
+ lineLength = 0;
+ } // end if: end of line
+ } // end for: each piece of array
+
+ if (d < len) {
+ //encode3to4(source, d + off, len - d, outBuff, e);
+ encode3to4(source, d + off, len - d, out, alpha);
+ e += 4;
+ } // end if: some padding needed
+
+ //out.append(new String(outBuff, 0, e));
+ //return new String(outBuff, 0, e);
+ } // end encodeBytes
+
+ /**
+ * Encodes a string in Base64 notation with line breaks
+ * after every 75 Base64 characters.
+ *
+ * @param s the string to encode
+ * @return the encoded string
+ * @since 1.3
+ */
+ private static String encodeString(String s) {
+ return encodeString(s, true);
+ } // end encodeString
+
+ /**
+ * Encodes a string in Base64 notation with line breaks
+ * after every 75 Base64 characters.
+ *
+ * @param s the string to encode
+ * @param breakLines Break lines at 80 characters or less.
+ * @return the encoded string
+ * @since 1.3
+ */
+ private static String encodeString(String s, boolean breakLines) {
+ byte src[] = new byte[s.length()];
+ for (int i = 0; i < src.length; i++)
+ src[i] = (byte)(s.charAt(i) & 0xFF);
+ return encodeBytes(src, breakLines);
+ } // end encodeString
+
+ /* ******** D E C O D I N G M E T H O D S ******** */
+
+ /**
+ * Decodes the first four bytes of array fourBytes
+ * and returns an array up to three bytes long with the
+ * decoded values.
+ *
+ * @param fourBytes the array with Base64 content
+ * @return array with decoded values
+ * @since 1.3
+ */
+ private static byte[] decode4to3(byte[] fourBytes) {
+ byte[] outBuff1 = new byte[3];
+ int count = decode4to3(fourBytes, 0, outBuff1, 0);
+ byte[] outBuff2 = new byte[count];
+
+ for (int i = 0; i < count; i++)
+ outBuff2[i] = outBuff1[i];
+
+ return outBuff2;
+ }
+
+ /**
+ * Decodes four bytes from array source
+ * and writes the resulting bytes (up to three of them)
+ * to destination.
+ * The source and destination arrays can be manipulated
+ * anywhere along their length by specifying
+ * srcOffset and destOffset.
+ * This method does not check to make sure your arrays
+ * are large enough to accommodate srcOffset + 4 for
+ * the source array or destOffset + 3 for
+ * the destination array.
+ * This method returns the actual number of bytes that
+ * were converted from the Base64 encoding.
+ *
+ *
+ * @param source the array to convert
+ * @param srcOffset the index where conversion begins
+ * @param destination the array to hold the conversion
+ * @param destOffset the index where output will be put
+ * @return the number of decoded bytes converted
+ * @since 1.3
+ */
+ private static int decode4to3(byte[] source, int srcOffset, byte[] destination, int destOffset) {
+ // Example: Dk==
+ if (source[srcOffset + 2] == EQUALS_SIGN) {
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1] ] << 24 ) >>> 12 );
+ int outBuff = ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12);
+
+ destination[destOffset] = (byte) (outBuff >>> 16);
+ return 1;
+ }
+
+ // Example: DkL=
+ else if (source[srcOffset + 3] == EQUALS_SIGN) {
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 );
+ int outBuff = ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+ | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6);
+
+ destination[destOffset] = (byte) (outBuff >>> 16);
+ destination[destOffset + 1] = (byte) (outBuff >>> 8);
+ return 2;
+ }
+
+ // Example: DkLE
+ else {
+ try {
+ // Two ways to do the same thing. Don't know which way I like best.
+ //int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 )
+ // | ( ( DECODABET[ source[ srcOffset + 3 ] ] << 24 ) >>> 24 );
+ int outBuff = ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+ | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6)
+ | ((DECODABET[source[srcOffset + 3]] & 0xFF));
+
+ destination[destOffset] = (byte) (outBuff >> 16);
+ destination[destOffset + 1] = (byte) (outBuff >> 8);
+ destination[destOffset + 2] = (byte) (outBuff);
+
+ return 3;
+ } catch (Exception e) {
+ System.out.println("" + source[srcOffset] + ": " + (DECODABET[source[srcOffset]]));
+ System.out.println("" + source[srcOffset + 1] + ": " + (DECODABET[source[srcOffset + 1]]));
+ System.out.println("" + source[srcOffset + 2] + ": " + (DECODABET[source[srcOffset + 2]]));
+ System.out.println("" + source[srcOffset + 3] + ": " + (DECODABET[source[srcOffset + 3]]));
+ return -1;
+ } // end catch
+ }
+ } // end decodeToBytes
+
+ /**
+ * Decodes data from Base64 notation.
+ *
+ * @param s the string to decode
+ * @return the decoded data
+ * @since 1.4
+ */
+ private static byte[] standardDecode(String s) {
+ byte[] bytes = new byte[s.length()];
+ for (int i = 0; i < bytes.length; i++)
+ bytes[i] = (byte)(s.charAt(i) & 0xFF);
+ return decode(bytes, 0, bytes.length);
+ } // end decode
+
+ /**
+ * Decodes data from Base64 notation and
+ * returns it as a string.
+ * Equivalent to calling
+ * <code>new String( decode( s ) )</code>
+ *
+ * @param s the string to decode
+ * @return The data as a string
+ * @since 1.4
+ */
+ public static String decodeToString(String s) {
+ return new String(decode(s));
+ } // end decodeToString
+
+ /**
+ * Decodes Base64 content in byte array format and returns
+ * the decoded byte array.
+ *
+ * @param source The Base64 encoded data
+ * @param off The offset of where to begin decoding
+ * @param len The length of characters to decode
+ * @return decoded data
+ * @since 1.3
+ */
+ private static byte[] decode(byte[] source, int off, int len) {
+ int len34 = len * 3 / 4;
+ byte[] outBuff = new byte[len34]; // Upper limit on size of output
+ int outBuffPosn = 0;
+
+ byte[] b4 = new byte[4];
+ int b4Posn = 0;
+ int i = 0;
+ byte sbiCrop = 0;
+ byte sbiDecode = 0;
+ for (i = 0; i < len; i++) {
+ sbiCrop = (byte) (source[i] & 0x7f); // Only the low seven bits
+ sbiDecode = DECODABET[sbiCrop];
+
+ if (sbiDecode >= WHITE_SPACE_ENC) // White space, Equals sign or better
+ {
+ if (sbiDecode >= EQUALS_SIGN_ENC) {
+ b4[b4Posn++] = sbiCrop;
+ if (b4Posn > 3) {
+ outBuffPosn += decode4to3(b4, 0, outBuff, outBuffPosn);
+ b4Posn = 0;
+
+ // If that was the equals sign, break out of 'for' loop
+ if (sbiCrop == EQUALS_SIGN) break;
+ } // end if: quartet built
+
+ } // end if: equals sign or better
+
+ } // end if: white space, equals sign or better
+ else {
+ _log.warn("Bad Base64 input character at " + i + ": " + source[i] + "(decimal)");
+ return null;
+ } // end else:
+ } // each input character
+
+ byte[] out = new byte[outBuffPosn];
+ System.arraycopy(outBuff, 0, out, 0, outBuffPosn);
+ return out;
+ } // end decode
+} // end class Base64
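The extract above omits the class's public wrappers and alphabet tables, but the "safe" variants show the idea: standard Base64 text with the two characters that are awkward in URLs and file names swapped out ('/' becomes '~' and '+' becomes '-'); ByteArray.toBase64() later in this patch calls into the same class. A tiny self-contained sketch of just that substitution (illustrative only, not the class's actual API):

    public class SafeAlphabetSketch {
        /** Map standard-alphabet Base64 text to the filesystem/URL friendly form. */
        static String toSafe(String standard) {
            return standard.replace('/', '~').replace('+', '-');
        }

        /** Map the friendly form back so a standard decoder can handle it. */
        static String toStandard(String safe) {
            return safe.replace('~', '/').replace('-', '+');
        }

        public static void main(String[] args) {
            String standard = "ab+cd/ef==";      // made-up Base64 text
            String safe = toSafe(standard);      // "ab-cd~ef=="
            System.out.println(safe + " round-trips: " + toStandard(safe).equals(standard));
        }
    }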
diff --git a/src/net/i2p/data/ByteArray.java b/src/net/i2p/data/ByteArray.java
new file mode 100644
index 0000000..5dbed12
--- /dev/null
+++ b/src/net/i2p/data/ByteArray.java
@@ -0,0 +1,93 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.Serializable;
+import net.i2p.data.Base64;
+
+/**
+ * Wrap up an array of bytes so that they can be compared and placed in hashes,
+ * maps, and the like.
+ *
+ */
+public class ByteArray implements Serializable, Comparable {
+ private byte[] _data;
+ private int _valid;
+ private int _offset;
+
+ public ByteArray() {
+ this(null);
+ }
+
+ public ByteArray(byte[] data) {
+ _offset = 0;
+ _data = data;
+ _valid = (data != null ? data.length : 0);
+ }
+ public ByteArray(byte[] data, int offset, int length) {
+ _data = data;
+ _offset = offset;
+ _valid = length;
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ /**
+ * Count how many of the bytes in the array are 'valid'.
+ * This property does not necessarily have meaning for all byte
+ * arrays.
+ */
+ public int getValid() { return _valid; }
+ public void setValid(int valid) { _valid = valid; }
+ public int getOffset() { return _offset; }
+ public void setOffset(int offset) { _offset = offset; }
+
+ public final boolean equals(Object o) {
+ if (o == null) return false;
+ if (o instanceof ByteArray) {
+ ByteArray ba = (ByteArray)o;
+ return compare(getData(), _offset, _valid, ba.getData(), ba.getOffset(), ba.getValid());
+ }
+
+ try {
+ byte val[] = (byte[]) o;
+ return compare(getData(), _offset, _valid, val, 0, val.length);
+ } catch (Throwable t) {
+ return false;
+ }
+ }
+
+ private static final boolean compare(byte[] lhs, int loff, int llen, byte[] rhs, int roff, int rlen) {
+ return (llen == rlen) && DataHelper.eq(lhs, loff, rhs, roff, llen);
+ }
+
+ public final int compareTo(Object obj) {
+ if (obj.getClass() != getClass()) throw new ClassCastException("invalid object: " + obj);
+ return DataHelper.compareTo(_data, ((ByteArray)obj).getData());
+ }
+
+ public final int hashCode() {
+ return DataHelper.hashCode(getData());
+ }
+
+ public String toString() {
+ return super.toString() + "/" + DataHelper.toString(getData(), 32) + "." + _valid;
+ }
+
+ public final String toBase64() {
+ return Base64.encode(_data, _offset, _valid);
+ }
+}
\ No newline at end of file
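Because Java arrays compare by identity, a raw byte[] makes a poor map key; ByteArray exists to give the bytes value semantics, as its class comment says. A small sketch (not part of the patch) of the difference:

    import java.util.HashMap;
    import java.util.Map;

    import net.i2p.data.ByteArray;

    public class ByteArraySketch {
        public static void main(String[] args) {
            byte[] a = new byte[] { 1, 2, 3 };
            byte[] b = new byte[] { 1, 2, 3 };

            System.out.println(a.equals(b));                               // false: identity
            System.out.println(new ByteArray(a).equals(new ByteArray(b))); // true: content

            Map cache = new HashMap();          // raw types, matching the Java 1.4-era code above
            cache.put(new ByteArray(a), "hit");
            System.out.println(cache.get(new ByteArray(b)));               // "hit"
        }
    }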
diff --git a/src/net/i2p/data/Certificate.java b/src/net/i2p/data/Certificate.java
new file mode 100644
index 0000000..89a5aca
--- /dev/null
+++ b/src/net/i2p/data/Certificate.java
@@ -0,0 +1,167 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines a certificate that can be attached to various I2P structures, such
+ * as RouterIdentity and Destination, allowing routers and clients to help
+ * manage denial of service attacks and the network utilization. Certificates
+ * can even be defined to include identifiable information signed by some
+ * certificate authority, though that use probably isn't appropriate for an
+ * anonymous network ;)
+ *
+ * @author jrandom
+ */
+public class Certificate extends DataStructureImpl {
+ private final static Log _log = new Log(Certificate.class);
+ private int _type;
+ private byte[] _payload;
+
+ /** Specifies a null certificate type with no payload */
+ public final static int CERTIFICATE_TYPE_NULL = 0;
+ /** specifies a Hashcash style certificate */
+ public final static int CERTIFICATE_TYPE_HASHCASH = 1;
+ /** we should not be used for anything (don't use us in the netDb, in tunnels, or tell others about us) */
+ public final static int CERTIFICATE_TYPE_HIDDEN = 2;
+
+ public Certificate() {
+ _type = 0;
+ _payload = null;
+ }
+
+ public Certificate(int type, byte[] payload) {
+ _type = type;
+ _payload = payload;
+ }
+
+ /** */
+ public int getCertificateType() {
+ return _type;
+ }
+
+ public void setCertificateType(int type) {
+ _type = type;
+ }
+
+ public byte[] getPayload() {
+ return _payload;
+ }
+
+ public void setPayload(byte[] payload) {
+ _payload = payload;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _type = (int) DataHelper.readLong(in, 1);
+ int length = (int) DataHelper.readLong(in, 2);
+ if (length > 0) {
+ _payload = new byte[length];
+ int read = read(in, _payload);
+ if (read != length)
+ throw new DataFormatException("Not enough bytes for the payload (read: " + read + " length: " + length
+ + ")");
+ }
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_type < 0) throw new DataFormatException("Invalid certificate type: " + _type);
+ //if ((_type != 0) && (_payload == null)) throw new DataFormatException("Payload is required for non null type");
+
+ DataHelper.writeLong(out, 1, _type);
+ if (_payload != null) {
+ DataHelper.writeLong(out, 2, _payload.length);
+ out.write(_payload);
+ } else {
+ DataHelper.writeLong(out, 2, 0L);
+ }
+ }
+
+
+ public int writeBytes(byte target[], int offset) {
+ int cur = offset;
+ DataHelper.toLong(target, cur, 1, _type);
+ cur++;
+ if (_payload != null) {
+ DataHelper.toLong(target, cur, 2, _payload.length);
+ cur += 2;
+ System.arraycopy(_payload, 0, target, cur, _payload.length);
+ cur += _payload.length;
+ } else {
+ DataHelper.toLong(target, cur, 2, 0);
+ cur += 2;
+ }
+ return cur - offset;
+ }
+
+ public int readBytes(byte source[], int offset) throws DataFormatException {
+ if (source == null) throw new DataFormatException("Cert is null");
+ if (source.length <= offset + 3)
+ throw new DataFormatException("Cert is too small [" + source.length + " off=" + offset + "]");
+
+ int cur = offset;
+ _type = (int)DataHelper.fromLong(source, cur, 1);
+ cur++;
+ int length = (int)DataHelper.fromLong(source, cur, 2);
+ cur += 2;
+ if (length > 0) {
+ if (length + cur > source.length)
+ throw new DataFormatException("Payload on the certificate is insufficient (len="
+ + source.length + " off=" + offset + " cur=" + cur
+ + " payloadLen=" + length);
+ _payload = new byte[length];
+ System.arraycopy(source, cur, _payload, 0, length);
+ cur += length;
+ }
+ return cur - offset;
+ }
+
+ public int size() {
+ return 1 + 2 + (_payload != null ? _payload.length : 0);
+ }
+
+ public boolean equals(Object object) {
+ if ((object == null) || !(object instanceof Certificate)) return false;
+ Certificate cert = (Certificate) object;
+ return getCertificateType() == cert.getCertificateType() && DataHelper.eq(getPayload(), cert.getPayload());
+ }
+
+ public int hashCode() {
+ return getCertificateType() + DataHelper.hashCode(getPayload());
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[Certificate: type: ");
+ if (getCertificateType() == CERTIFICATE_TYPE_NULL)
+ buf.append("Null certificate");
+ else if (getCertificateType() == CERTIFICATE_TYPE_HASHCASH)
+ buf.append("Hashcash certificate");
+ else
+ buf.append("Unknown certificiate type (").append(getCertificateType()).append(")");
+
+ if (_payload == null) {
+ buf.append(" null payload");
+ } else {
+ buf.append(" payload size: ").append(_payload.length);
+ int len = 32;
+ if (len > _payload.length) len = _payload.length;
+ buf.append(" first ").append(len).append(" bytes: ");
+ buf.append(DataHelper.toString(_payload, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+}
\ No newline at end of file
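readBytes and writeBytes above pin down the wire form of a certificate: one byte of type, a two-byte payload length, then the payload itself (so a payload-less null certificate is exactly three bytes). A short round-trip sketch (illustrative, not part of the patch):

    import net.i2p.data.Certificate;

    public class CertificateSketch {
        public static void main(String[] args) throws Exception {
            Certificate cert = new Certificate(Certificate.CERTIFICATE_TYPE_HASHCASH,
                                               new byte[] { 0x0a, 0x0b });

            byte[] wire = new byte[cert.size()];     // 1 + 2 + 2 = 5 bytes here
            int written = cert.writeBytes(wire, 0);  // { 0x01, 0x00, 0x02, 0x0a, 0x0b }

            Certificate copy = new Certificate();
            int read = copy.readBytes(wire, 0);

            System.out.println(written == read && copy.equals(cert));  // true
        }
    }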
diff --git a/src/net/i2p/data/DataFormatException.java b/src/net/i2p/data/DataFormatException.java
new file mode 100644
index 0000000..95e0d26
--- /dev/null
+++ b/src/net/i2p/data/DataFormatException.java
@@ -0,0 +1,30 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import net.i2p.I2PException;
+import net.i2p.util.Log;
+
+/**
+ * Thrown when the data was not available to read or write a DataStructure
+ *
+ * @author jrandom
+ */
+public class DataFormatException extends I2PException {
+ private final static Log _log = new Log(DataFormatException.class);
+
+ public DataFormatException(String msg, Throwable t) {
+ super(msg, t);
+ }
+
+ public DataFormatException(String msg) {
+ super(msg);
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/DataHelper.java b/src/net/i2p/data/DataHelper.java
new file mode 100644
index 0000000..6a67774
--- /dev/null
+++ b/src/net/i2p/data/DataHelper.java
@@ -0,0 +1,938 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import gnu.crypto.hash.Sha256Standalone;
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.io.UnsupportedEncodingException;
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Date;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Properties;
+import java.util.TreeMap;
+import java.util.zip.GZIPInputStream;
+
+import net.i2p.util.ByteCache;
+import net.i2p.util.CachingByteArrayOutputStream;
+import net.i2p.util.OrderedProperties;
+import net.i2p.util.ReusableGZIPInputStream;
+import net.i2p.util.ReusableGZIPOutputStream;
+
+/**
+ * Defines some simple IO routines for dealing with marshalling data structures
+ *
+ * @author jrandom
+ */
+public class DataHelper {
+ private final static byte _equalBytes[] = "=".getBytes(); // in UTF-8
+ private final static byte _semicolonBytes[] = ";".getBytes(); // in UTF-8
+
+ /** Read a mapping from the stream, as defined by the I2P data structure spec,
+ * and store it into a Properties object.
+ *
+ * A mapping is a set of key / value pairs. It starts with a 2 byte Integer (ala readLong(rawStream, 2))
+ * defining how many bytes make up the mapping. After that comes that many bytes making
+ * up a set of UTF-8 encoded characters. The characters are organized as key=value;.
+ * The key is a String (ala readString(rawStream)) unique as a key within the current
+ * mapping that does not include the UTF-8 characters '=' or ';'. After the key
+ * comes the literal UTF-8 character '='. After that comes a String (ala readString(rawStream))
+ * for the value. Finally after that comes the literal UTF-8 character ';'. This key=value;
+ * is repeated until there are no more bytes (not characters!) left as defined by the
+ * first two byte integer.
+ * @param rawStream stream to read the mapping from
+ * @throws DataFormatException if the format is invalid
+ * @throws IOException if there is a problem reading the data
+ * @return mapping
+ */
+ public static Properties readProperties(InputStream rawStream)
+ throws DataFormatException, IOException {
+ Properties props = new OrderedProperties();
+ long size = readLong(rawStream, 2);
+ byte data[] = new byte[(int) size];
+ int read = read(rawStream, data);
+ if (read != size) throw new DataFormatException("Not enough data to read the properties");
+ ByteArrayInputStream in = new ByteArrayInputStream(data);
+ byte eqBuf[] = new byte[_equalBytes.length];
+ byte semiBuf[] = new byte[_semicolonBytes.length];
+ while (in.available() > 0) {
+ String key = readString(in);
+ read = read(in, eqBuf);
+ if ((read != eqBuf.length) || (!eq(eqBuf, _equalBytes))) {
+ break;
+ }
+ String val = readString(in);
+ read = read(in, semiBuf);
+ if ((read != semiBuf.length) || (!eq(semiBuf, _semicolonBytes))) {
+ break;
+ }
+ props.put(key, val);
+ }
+ return props;
+ }
+
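    // Illustrative note (not part of the original file): as a worked example of the mapping
    // format read above, a single pair "k" -> "v" arrives on the stream as eight bytes:
    //
    //     00 06   01 6b   3d   01 76   3b
    //     |size|  |"k"|   '='  |"v"|   ';'
    //
    // i.e. a 2-byte payload length (6), then a 1-byte-length-prefixed UTF-8 key, the literal
    // '=', a length-prefixed value, and the literal ';', repeated until the payload is used up.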
+ /**
+ * Write a mapping to the stream, as defined by the I2P data structure spec,
+ * from the given Properties object. See readProperties for the format.
+ *
+ * @param rawStream stream to write to
+ * @param props properties to write out
+ * @throws DataFormatException if there is not enough valid data to write out
+ * @throws IOException if there is an IO error writing out the data
+ */
+ public static void writeProperties(OutputStream rawStream, Properties props)
+ throws DataFormatException, IOException {
+ if (props != null) {
+ OrderedProperties p = new OrderedProperties();
+ p.putAll(props);
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(32);
+ for (Iterator iter = p.keySet().iterator(); iter.hasNext();) {
+ String key = (String) iter.next();
+ String val = p.getProperty(key);
+ // now make sure they're in UTF-8
+ //key = new String(key.getBytes(), "UTF-8");
+ //val = new String(val.getBytes(), "UTF-8");
+ writeString(baos, key);
+ baos.write(_equalBytes);
+ writeString(baos, val);
+ baos.write(_semicolonBytes);
+ }
+ baos.close();
+ byte propBytes[] = baos.toByteArray();
+ writeLong(rawStream, 2, propBytes.length);
+ rawStream.write(propBytes);
+ } else {
+ writeLong(rawStream, 2, 0);
+ }
+ }
+
+ public static int toProperties(byte target[], int offset, Properties props) throws DataFormatException, IOException {
+ if (props != null) {
+ OrderedProperties p = new OrderedProperties();
+ p.putAll(props);
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(32);
+ for (Iterator iter = p.keySet().iterator(); iter.hasNext();) {
+ String key = (String) iter.next();
+ String val = p.getProperty(key);
+ // now make sure they're in UTF-8
+ //key = new String(key.getBytes(), "UTF-8");
+ //val = new String(val.getBytes(), "UTF-8");
+ writeString(baos, key);
+ baos.write(_equalBytes);
+ writeString(baos, val);
+ baos.write(_semicolonBytes);
+ }
+ baos.close();
+ byte propBytes[] = baos.toByteArray();
+ toLong(target, offset, 2, propBytes.length);
+ offset += 2;
+ System.arraycopy(propBytes, 0, target, offset, propBytes.length);
+ offset += propBytes.length;
+ return offset;
+ } else {
+ toLong(target, offset, 2, 0);
+ return offset + 2;
+ }
+ }
+
+ public static int fromProperties(byte source[], int offset, Properties target) throws DataFormatException, IOException {
+ int size = (int)fromLong(source, offset, 2);
+ offset += 2;
+ ByteArrayInputStream in = new ByteArrayInputStream(source, offset, size);
+ byte eqBuf[] = new byte[_equalBytes.length];
+ byte semiBuf[] = new byte[_semicolonBytes.length];
+ while (in.available() > 0) {
+ String key = readString(in);
+ int read = read(in, eqBuf);
+ if ((read != eqBuf.length) || (!eq(eqBuf, _equalBytes))) {
+ break;
+ }
+ String val = readString(in);
+ read = read(in, semiBuf);
+ if ((read != semiBuf.length) || (!eq(semiBuf, _semicolonBytes))) {
+ break;
+ }
+ target.put(key, val);
+ }
+ return offset + size;
+ }
+
+ public static byte[] toProperties(Properties opts) {
+ try {
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(2);
+ writeProperties(baos, opts);
+ return baos.toByteArray();
+ } catch (DataFormatException dfe) {
+ throw new RuntimeException("Format error writing to memory?! " + dfe.getMessage());
+ } catch (IOException ioe) {
+ throw new RuntimeException("IO error writing to memory?! " + ioe.getMessage());
+ }
+ }
+
+ /**
+ * Pretty print the mapping
+ *
+ */
+ public static String toString(Properties options) {
+ StringBuffer buf = new StringBuffer();
+ if (options != null) {
+ for (Iterator iter = options.keySet().iterator(); iter.hasNext();) {
+ String key = (String) iter.next();
+ String val = options.getProperty(key);
+ buf.append("[").append(key).append("] = [").append(val).append("]");
+ }
+ } else {
+ buf.append("(null properties map)");
+ }
+ return buf.toString();
+ }
+
+ /**
+ * A more efficient Properties.load
+ *
+ */
+ public static void loadProps(Properties props, File file) throws IOException {
+ loadProps(props, file, false);
+ }
+ public static void loadProps(Properties props, File file, boolean forceLowerCase) throws IOException {
+ loadProps(props, new FileInputStream(file), forceLowerCase);
+ }
+ public static void loadProps(Properties props, InputStream inStr) throws IOException {
+ loadProps(props, inStr, false);
+ }
+ public static void loadProps(Properties props, InputStream inStr, boolean forceLowerCase) throws IOException {
+ BufferedReader in = null;
+ try {
+ in = new BufferedReader(new InputStreamReader(inStr, "UTF-8"), 16*1024);
+ String line = null;
+ while ( (line = in.readLine()) != null) {
+ if (line.trim().length() <= 0) continue;
+ if (line.charAt(0) == '#') continue;
+ if (line.charAt(0) == ';') continue;
+ if (line.indexOf('#') > 0) // trim off any end of line comment
+ line = line.substring(0, line.indexOf('#')).trim();
+ int split = line.indexOf('=');
+ if (split <= 0) continue;
+ String key = line.substring(0, split);
+ String val = line.substring(split+1);
+ if ( (key.length() > 0) && (val.length() > 0) )
+ if (forceLowerCase)
+ props.setProperty(key.toLowerCase(), val);
+ else
+ props.setProperty(key, val);
+ }
+ } finally {
+ if (in != null) try { in.close(); } catch (IOException ioe) {}
+ }
+ }
+
+ public static void storeProps(Properties props, File file) throws IOException {
+ PrintWriter out = null;
+ try {
+ out = new PrintWriter(new BufferedWriter(new FileWriter(file)));
+ for (Iterator iter = props.keySet().iterator(); iter.hasNext(); ) {
+ String name = (String)iter.next();
+ String val = props.getProperty(name);
+ out.println(name + "=" + val);
+ }
+ out.flush();
+ out.close();
+ } finally {
+ if (out != null) out.close();
+ }
+ }
+
+ /**
+ * Pretty print the collection
+ *
+ */
+ public static String toString(Collection col) {
+ StringBuffer buf = new StringBuffer();
+ if (col != null) {
+ for (Iterator iter = col.iterator(); iter.hasNext();) {
+ Object o = iter.next();
+ buf.append("[").append(o).append("]");
+ if (iter.hasNext()) buf.append(", ");
+ }
+ } else {
+ buf.append("null");
+ }
+ return buf.toString();
+ }
+
+ public static String toString(byte buf[]) {
+ if (buf == null) return "";
+
+ return toString(buf, buf.length);
+ }
+
+ private static final byte[] EMPTY_BUFFER = "".getBytes();
+
+ public static String toString(byte buf[], int len) {
+ if (buf == null) buf = EMPTY_BUFFER;
+ StringBuffer out = new StringBuffer();
+ if (len > buf.length) {
+ for (int i = 0; i < len - buf.length; i++)
+ out.append("00");
+ }
+ for (int i = 0; i < buf.length && i < len; i++) {
+ StringBuffer temp = new StringBuffer(Integer.toHexString(buf[i]));
+ while (temp.length() < 2) {
+ temp.insert(0, '0');
+ }
+ temp = new StringBuffer(temp.substring(temp.length() - 2));
+ out.append(temp.toString());
+ }
+ return out.toString();
+ }
+
+ public static String toDecimalString(byte buf[], int len) {
+ if (buf == null) buf = EMPTY_BUFFER;
+ BigInteger val = new BigInteger(1, buf);
+ return val.toString(10);
+ }
+
+ public final static String toHexString(byte data[]) {
+ if ((data == null) || (data.length <= 0)) return "00";
+ BigInteger bi = new BigInteger(1, data);
+ return bi.toString(16);
+ }
+
+ public final static byte[] fromHexString(String val) {
+ BigInteger bv = new BigInteger(val, 16);
+ return bv.toByteArray();
+ }
+
+ /** Read the stream for an integer as defined by the I2P data structure specification.
+ * Integers are a fixed number of bytes (numBytes), stored as unsigned integers in network byte order.
+ * @param rawStream stream to read from
+ * @param numBytes number of bytes to read and format into a number
+ * @throws DataFormatException if the stream doesn't contain a validly formatted number of that many bytes
+ * @throws IOException if there is an IO error reading the number
+ * @return number
+ */
+ public static long readLong(InputStream rawStream, int numBytes)
+ throws DataFormatException, IOException {
+ if (numBytes > 8)
+ throw new DataFormatException("readLong doesn't currently support reading numbers > 8 bytes [as thats bigger than java's long]");
+
+ long rv = 0;
+ for (int i = 0; i < numBytes; i++) {
+ long cur = rawStream.read();
+ if (cur == -1) throw new DataFormatException("Not enough bytes for the field");
+ cur &= 0xFF;
+ // we loop until we find a nonzero byte (or we reach the end)
+ if (cur != 0) {
+ // ok, data found, now iterate through it to fill the rv
+ long remaining = numBytes - i;
+ for (int j = 0; j < remaining; j++) {
+ long shiftAmount = 8 * (remaining-j-1);
+ cur = cur << shiftAmount;
+ rv += cur;
+ if (j + 1 < remaining) {
+ cur = rawStream.read();
+ if (cur == -1)
+ throw new DataFormatException("Not enough bytes for the field");
+ cur &= 0xFF;
+ }
+ }
+ break;
+ }
+ }
+
+ return rv;
+ }
+
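    // Illustrative note (not part of the original file): integers here are fixed-width,
    // unsigned, big-endian. For example, readLong(in, 2) applied to the bytes 0x01 0x02
    // yields 258, and writeLong(out, 4, 258) emits 0x00 0x00 0x01 0x02, the same value
    // padded out to the requested width.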
+ /** Write an integer as defined by the I2P data structure specification to the stream.
+ * Integers are a fixed number of bytes (numBytes), stored as unsigned integers in network byte order.
+ * @param value value to write out
+ * @param rawStream stream to write to
+ * @param numBytes number of bytes to write the number into (padding as necessary)
+ * @throws DataFormatException if the stream doesn't contain a validly formatted number of that many bytes
+ * @throws IOException if there is an IO error writing to the stream
+ */
+ public static void writeLong(OutputStream rawStream, int numBytes, long value)
+ throws DataFormatException, IOException {
+ if (value < 0) throw new DataFormatException("Value is negative (" + value + ")");
+ for (int i = numBytes - 1; i >= 0; i--) {
+ byte cur = (byte)( (value >>> (i*8) ) & 0xFF);
+ rawStream.write(cur);
+ }
+ }
+
+ public static byte[] toLong(int numBytes, long value) throws IllegalArgumentException {
+ if (value < 0) throw new IllegalArgumentException("Negative value not allowed");
+ byte val[] = new byte[numBytes];
+ toLong(val, 0, numBytes, value);
+ return val;
+ }
+
+ public static void toLong(byte target[], int offset, int numBytes, long value) throws IllegalArgumentException {
+ if (numBytes <= 0) throw new IllegalArgumentException("Invalid number of bytes");
+ if (value < 0) throw new IllegalArgumentException("Negative value not allowed");
+ for (int i = 0; i < numBytes; i++)
+ target[offset+numBytes-i-1] = (byte)(value >>> (i*8));
+ }
+
+ public static long fromLong(byte src[], int offset, int numBytes) {
+ if ( (src == null) || (src.length == 0) )
+ return 0;
+
+ long rv = 0;
+ for (int i = 0; i < numBytes; i++) {
+ long cur = src[offset+i] & 0xFF;
+ cur = (cur << (8*(numBytes-i-1)));
+ rv += cur;
+ }
+ if (rv < 0)
+ throw new IllegalArgumentException("wtf, fromLong got a negative? " + rv + ": offset="+ offset +" numBytes="+numBytes);
+ return rv;
+ }
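+
+    /*
+     * Usage sketch for the array-based variants: the same 2 byte encoding without streams.
+     *
+     *   byte field[] = toLong(2, 258);     // { 0x01, 0x02 }
+     *   long back = fromLong(field, 0, 2); // back == 258
+     */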
+
+ /** Read in a date from the stream as specified by the I2P data structure spec.
+ * A date is an 8 byte unsigned integer in network byte order specifying the number of
+ * milliseconds since midnight on January 1, 1970 in the GMT timezone. If the number is
+ * 0, the date is undefined or null. (yes, this means you can't represent midnight on 1/1/1970)
+ * @param in stream to read from
+ * @throws DataFormatException if the stream doesn't contain a validly formatted date
+ * @throws IOException if there is an IO error reading the date
+ * @return date read, or null
+ */
+ public static Date readDate(InputStream in) throws DataFormatException, IOException {
+ long date = readLong(in, DATE_LENGTH);
+ if (date == 0L) return null;
+
+ return new Date(date);
+ }
+
+ /** Write out a date to the stream as specified by the I2P data structure spec.
+ * @param out stream to write to
+ * @param date date to write (can be null)
+ * @throws DataFormatException if the date is not valid
+ * @throws IOException if there is an IO error writing the date
+ */
+ public static void writeDate(OutputStream out, Date date)
+ throws DataFormatException, IOException {
+ if (date == null)
+ writeLong(out, DATE_LENGTH, 0L);
+ else
+ writeLong(out, DATE_LENGTH, date.getTime());
+ }
+ public static byte[] toDate(Date date) throws IllegalArgumentException {
+ if (date == null)
+ return toLong(DATE_LENGTH, 0L);
+ else
+ return toLong(DATE_LENGTH, date.getTime());
+ }
+ public static void toDate(byte target[], int offset, long when) throws IllegalArgumentException {
+ toLong(target, offset, DATE_LENGTH, when);
+ }
+ public static Date fromDate(byte src[], int offset) throws DataFormatException {
+ if ( (src == null) || (offset + DATE_LENGTH > src.length) )
+ throw new DataFormatException("Not enough data to read a date");
+ try {
+ long when = fromLong(src, offset, DATE_LENGTH);
+ if (when <= 0)
+ return null;
+ else
+ return new Date(when);
+ } catch (IllegalArgumentException iae) {
+ throw new DataFormatException(iae.getMessage());
+ }
+ }
+
+ public static final int DATE_LENGTH = 8;
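+
+    /*
+     * Usage sketch for the date encoding: a zero value is reserved for "no date",
+     * so midnight on 1/1/1970 GMT cannot be round-tripped.
+     *
+     *   byte raw[] = toDate(new Date(0));  // eight 0x00 bytes
+     *   Date back = fromDate(raw, 0);      // back == null
+     */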
+
+ /** Read in a string from the stream as specified by the I2P data structure spec.
+ * A string is 1 or more bytes where the first byte is the number of bytes (not characters!)
+ * in the string and the remaining 0-255 bytes are the non-null terminated UTF-8 encoded character array.
+ * @param in stream to read from
+ * @throws DataFormatException if the stream doesn't contain a validly formatted string
+ * @throws IOException if there is an IO error reading the string
+ * @return UTF-8 string
+ */
+ public static String readString(InputStream in) throws DataFormatException, IOException {
+ int size = (int) readLong(in, 1);
+ byte raw[] = new byte[size];
+ int read = read(in, raw);
+ if (read != size) throw new DataFormatException("Not enough bytes to read the string");
+        return new String(raw, "UTF-8");
+ }
+
+ /** Write out a string to the stream as specified by the I2P data structure spec. Note that the max
+ * size for a string allowed by the spec is 255 bytes.
+ *
+ * @param out stream to write string
+ * @param string string to write out: null strings are perfectly valid, but strings of excess length will
+ * cause a DataFormatException to be thrown
+ * @throws DataFormatException if the string is not valid
+ * @throws IOException if there is an IO error writing the string
+ */
+ public static void writeString(OutputStream out, String string)
+ throws DataFormatException, IOException {
+        if (string == null) {
+            writeLong(out, 1, 0);
+        } else {
+            byte raw[] = getUTF8(string);
+            int len = raw.length;
+            if (len > 255)
+                throw new DataFormatException("The I2P data spec limits strings to 255 bytes or less, but this is "
+                                              + len + " bytes [" + string + "]");
+            writeLong(out, 1, len);
+            out.write(raw);
+        }
+ }
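+
+    /*
+     * Usage sketch for the string encoding: a single length byte followed by the
+     * UTF-8 bytes themselves (a null string is written as a lone 0x00).
+     *
+     *   writeString(out, "abc");      // emits 0x03 'a' 'b' 'c'
+     *   String back = readString(in); // "abc"
+     */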
+
+ /** Read in a boolean as specified by the I2P data structure spec.
+ * A boolean is 1 byte that is either 0 (false), 1 (true), or 2 (null)
+ * @param in stream to read from
+ * @throws DataFormatException if the boolean is not valid
+ * @throws IOException if there is an IO error reading the boolean
+ * @return boolean value, or null
+ */
+ public static Boolean readBoolean(InputStream in) throws DataFormatException, IOException {
+ int val = (int) readLong(in, 1);
+ switch (val) {
+ case 0:
+ return Boolean.FALSE;
+ case 1:
+ return Boolean.TRUE;
+ case 2:
+ return null;
+ default:
+ throw new DataFormatException("Uhhh.. readBoolean read a value that isn't a known ternary val (0,1,2): "
+ + val);
+ }
+ }
+
+ /** Write out a boolean as specified by the I2P data structure spec.
+ * A boolean is 1 byte that is either 0 (false), 1 (true), or 2 (null)
+ * @param out stream to write to
+ * @param bool boolean value, or null
+ * @throws DataFormatException if the boolean is not valid
+ * @throws IOException if there is an IO error writing the boolean
+ */
+ public static void writeBoolean(OutputStream out, Boolean bool)
+ throws DataFormatException, IOException {
+ if (bool == null)
+ writeLong(out, 1, BOOLEAN_UNKNOWN);
+ else if (Boolean.TRUE.equals(bool))
+ writeLong(out, 1, BOOLEAN_TRUE);
+ else
+ writeLong(out, 1, BOOLEAN_FALSE);
+ }
+
+ public static Boolean fromBoolean(byte data[], int offset) {
+ if (data[offset] == BOOLEAN_TRUE)
+ return Boolean.TRUE;
+ else if (data[offset] == BOOLEAN_FALSE)
+ return Boolean.FALSE;
+ else
+ return null;
+ }
+
+ public static void toBoolean(byte data[], int offset, boolean value) {
+ data[offset] = (value ? BOOLEAN_TRUE : BOOLEAN_FALSE);
+ }
+ public static void toBoolean(byte data[], int offset, Boolean value) {
+ if (value == null)
+ data[offset] = BOOLEAN_UNKNOWN;
+ else
+ data[offset] = (value.booleanValue() ? BOOLEAN_TRUE : BOOLEAN_FALSE);
+ }
+
+ public static final byte BOOLEAN_TRUE = 0x1;
+ public static final byte BOOLEAN_FALSE = 0x0;
+ public static final byte BOOLEAN_UNKNOWN = 0x2;
+ public static final int BOOLEAN_LENGTH = 1;
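+
+    /*
+     * Usage sketch for the ternary boolean encoding defined by the constants above:
+     *
+     *   writeBoolean(out, Boolean.TRUE); // emits 0x01
+     *   writeBoolean(out, null);         // emits 0x02, read back as null by readBoolean()
+     */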
+
+ //
+ // The following comparator helpers make it simpler to write consistently comparing
+ // functions for objects based on their value, not JVM memory address
+ //
+
+ /**
+ * Helper util to compare two objects, including null handling.
+ *
+ *
+ * This treats (null == null) as true, and (null == (!null)) as false.
+ */
+ public final static boolean eq(Object lhs, Object rhs) {
+ try {
+ boolean eq = (((lhs == null) && (rhs == null)) || ((lhs != null) && (lhs.equals(rhs))));
+ return eq;
+ } catch (ClassCastException cce) {
+ return false;
+ }
+ }
+
+ /**
+ * Run a deep comparison across the two collections.
+ *
+ *
+     * This treats (null == null) as true and (null == (!null)) as false, and
+     * otherwise compares each element via eq(object, object).
+     *
+     * If the sizes of the collections are not equal, the comparison returns false.
+ * The collection order should be consistent, as this simply iterates across both and compares
+ * based on the value of each at each step along the way.
+ *
+ */
+ public final static boolean eq(Collection lhs, Collection rhs) {
+ if ((lhs == null) && (rhs == null)) return true;
+ if ((lhs == null) || (rhs == null)) return false;
+ if (lhs.size() != rhs.size()) return false;
+ Iterator liter = lhs.iterator();
+ Iterator riter = rhs.iterator();
+ while ((liter.hasNext()) && (riter.hasNext()))
+ if (!(eq(liter.next(), riter.next()))) return false;
+ return true;
+ }
+
+ /**
+ * Run a comparison on the byte arrays, byte by byte.
+ *
+ * This treats (null == null) as true, (null == (!null)) as false,
+ * and unequal length arrays as false.
+ *
+ */
+ public final static boolean eq(byte lhs[], byte rhs[]) {
+ boolean eq = (((lhs == null) && (rhs == null)) || ((lhs != null) && (rhs != null) && (Arrays.equals(lhs, rhs))));
+ return eq;
+ }
+
+ /**
+ * Compare two integers, really just for consistency.
+ */
+ public final static boolean eq(int lhs, int rhs) {
+ return lhs == rhs;
+ }
+
+ /**
+ * Compare two longs, really just for consistency.
+ */
+ public final static boolean eq(long lhs, long rhs) {
+ return lhs == rhs;
+ }
+
+ /**
+ * Compare two bytes, really just for consistency.
+ */
+ public final static boolean eq(byte lhs, byte rhs) {
+ return lhs == rhs;
+ }
+
+ public final static boolean eq(byte lhs[], int offsetLeft, byte rhs[], int offsetRight, int length) {
+ if ( (lhs == null) || (rhs == null) ) return false;
+ if (length <= 0) return true;
+ for (int i = 0; i < length; i++) {
+ if (lhs[offsetLeft + i] != rhs[offsetRight + i])
+ return false;
+ }
+ return true;
+ }
+
+ public final static int compareTo(byte lhs[], byte rhs[]) {
+ if ((rhs == null) && (lhs == null)) return 0;
+ if (lhs == null) return -1;
+ if (rhs == null) return 1;
+ if (rhs.length < lhs.length) return 1;
+ if (rhs.length > lhs.length) return -1;
+ for (int i = 0; i < rhs.length; i++) {
+ if (rhs[i] > lhs[i])
+ return -1;
+ else if (rhs[i] < lhs[i]) return 1;
+ }
+ return 0;
+ }
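+
+    /*
+     * Note on ordering: compareTo() ranks shorter arrays before longer ones, and only
+     * then compares byte by byte (as signed bytes). For example:
+     *
+     *   compareTo(new byte[] { 0x01 }, new byte[] { 0x02 })       // -1 (0x01 sorts first)
+     *   compareTo(new byte[] { 0x01, 0x00 }, new byte[] { 0x7F }) //  1 (length dominates)
+     */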
+
+ public final static byte[] xor(byte lhs[], byte rhs[]) {
+ if ((lhs == null) || (rhs == null) || (lhs.length != rhs.length)) return null;
+        byte diff[] = new byte[lhs.length];
+ xor(lhs, 0, rhs, 0, diff, 0, lhs.length);
+ return diff;
+ }
+
+ /**
+ * xor the lhs with the rhs, storing the result in out.
+ *
+ * @param lhs one of the source arrays
+ * @param startLeft starting index in the lhs array to begin the xor
+ * @param rhs the other source array
+ * @param startRight starting index in the rhs array to begin the xor
+ * @param out output array
+ * @param startOut starting index in the out array to store the result
+ * @param len how many bytes into the various arrays to xor
+ */
+ public final static void xor(byte lhs[], int startLeft, byte rhs[], int startRight, byte out[], int startOut, int len) {
+ if ( (lhs == null) || (rhs == null) || (out == null) )
+ throw new NullPointerException("Invalid params to xor (" + lhs + ", " + rhs + ", " + out + ")");
+ if (lhs.length < startLeft + len)
+ throw new IllegalArgumentException("Left hand side is too short");
+ if (rhs.length < startRight + len)
+ throw new IllegalArgumentException("Right hand side is too short");
+ if (out.length < startOut + len)
+ throw new IllegalArgumentException("Result is too short");
+
+ for (int i = 0; i < len; i++)
+ out[startOut + i] = (byte) (lhs[startLeft + i] ^ rhs[startRight + i]);
+ }
+
+ //
+ // The following hashcode helpers make it simpler to write consistently hashing
+ // functions for objects based on their value, not JVM memory address
+ //
+
+ /**
+ * Calculate the hashcode of the object, using 0 for null
+ *
+ */
+ public static int hashCode(Object obj) {
+ if (obj == null) return 0;
+
+ return obj.hashCode();
+ }
+
+ /**
+ * Calculate the hashcode of the date, using 0 for null
+ *
+ */
+ public static int hashCode(Date obj) {
+ if (obj == null) return 0;
+
+ return (int) obj.getTime();
+ }
+
+ /**
+ * Calculate the hashcode of the byte array, using 0 for null
+ *
+ */
+ public static int hashCode(byte b[]) {
+ int rv = 0;
+ if (b != null) {
+ for (int i = 0; i < b.length && i < 32; i++)
+ rv += (b[i] << i);
+ }
+ return rv;
+ }
+
+ /**
+ * Calculate the hashcode of the collection, using 0 for null
+ *
+ */
+ public static int hashCode(Collection col) {
+ if (col == null) return 0;
+ int c = 0;
+ for (Iterator iter = col.iterator(); iter.hasNext();)
+ c = 7 * c + hashCode(iter.next());
+ return c;
+ }
+
+ public static int read(InputStream in, byte target[]) throws IOException {
+ return read(in, target, 0, target.length);
+ }
+ public static int read(InputStream in, byte target[], int offset, int length) throws IOException {
+ int cur = offset;
+ while (cur < length) {
+ int numRead = in.read(target, cur, length - cur);
+ if (numRead == -1) {
+ if (cur == offset) return -1; // throw new EOFException("EOF Encountered during reading");
+ return cur;
+ }
+ cur += numRead;
+ }
+ return cur;
+ }
+
+
+ /**
+ * Read a newline delimited line from the stream, returning the line (without
+ * the newline), or null if EOF reached before the newline was found
+ */
+ public static String readLine(InputStream in) throws IOException { return readLine(in, (Sha256Standalone)null); }
+ /** update the hash along the way */
+ public static String readLine(InputStream in, Sha256Standalone hash) throws IOException {
+ StringBuffer buf = new StringBuffer(128);
+ boolean ok = readLine(in, buf, hash);
+ if (ok)
+ return buf.toString();
+ else
+ return null;
+ }
+ /**
+ * Read in a line, placing it into the buffer (excluding the newline).
+ *
+ * @return true if the line was read, false if eof was reached before a
+ * newline was found
+ */
+ public static boolean readLine(InputStream in, StringBuffer buf) throws IOException {
+ return readLine(in, buf, null);
+ }
+ /** update the hash along the way */
+ public static boolean readLine(InputStream in, StringBuffer buf, Sha256Standalone hash) throws IOException {
+ int c = -1;
+ while ( (c = in.read()) != -1) {
+ if (hash != null) hash.update((byte)c);
+ if (c == '\n')
+ break;
+ buf.append((char)c);
+ }
+ if (c == -1)
+ return false;
+ else
+ return true;
+ }
+
+ public static void write(OutputStream out, byte data[], Sha256Standalone hash) throws IOException {
+ hash.update(data);
+ out.write(data);
+ }
+
+ public static List sortStructures(Collection dataStructures) {
+ if (dataStructures == null) return new ArrayList();
+ ArrayList rv = new ArrayList(dataStructures.size());
+ TreeMap tm = new TreeMap();
+ for (Iterator iter = dataStructures.iterator(); iter.hasNext();) {
+ DataStructure struct = (DataStructure) iter.next();
+ tm.put(struct.calculateHash().toString(), struct);
+ }
+ for (Iterator iter = tm.keySet().iterator(); iter.hasNext();) {
+ Object k = iter.next();
+ rv.add(tm.get(k));
+ }
+ return rv;
+ }
+
+ public static String formatDuration(long ms) {
+ if (ms < 30 * 1000) {
+ return ms + "ms";
+ } else if (ms < 5 * 60 * 1000) {
+ return (ms / 1000) + "s";
+ } else if (ms < 120 * 60 * 1000) {
+ return (ms / (60 * 1000)) + "m";
+ } else if (ms < 3 * 24 * 60 * 60 * 1000) {
+ return (ms / (60 * 60 * 1000)) + "h";
+ } else if (ms > 365l * 24l * 60l * 60l * 1000l) {
+ return "n/a";
+ } else {
+ return (ms / (24 * 60 * 60 * 1000)) + "d";
+ }
+ }
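+
+    /*
+     * Sample outputs of formatDuration():
+     *
+     *   formatDuration(12 * 1000)          // "12000ms" (under 30 seconds stays in ms)
+     *   formatDuration(90 * 1000)          // "90s"
+     *   formatDuration(3 * 60 * 60 * 1000) // "3h"
+     */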
+
+ /**
+     * Strip out any HTML tags (by replacing any less than / greater than symbols with spaces)
+ */
+ public static String stripHTML(String orig) {
+ if (orig == null) return "";
+ String t1 = orig.replace('<', ' ');
+ String rv = t1.replace('>', ' ');
+ return rv;
+ }
+
+ private static final int MAX_UNCOMPRESSED = 40*1024;
+ /** compress the data and return a new GZIP compressed array */
+ public static byte[] compress(byte orig[]) {
+ return compress(orig, 0, orig.length);
+ }
+ public static byte[] compress(byte orig[], int offset, int size) {
+ if ((orig == null) || (orig.length <= 0)) return orig;
+ if (size >= MAX_UNCOMPRESSED)
+ throw new IllegalArgumentException("tell jrandom size=" + size);
+ ReusableGZIPOutputStream out = ReusableGZIPOutputStream.acquire();
+ try {
+ out.write(orig, offset, size);
+ out.finish();
+ out.flush();
+ byte rv[] = out.getData();
+ //if (_log.shouldLog(Log.DEBUG))
+ // _log.debug("Compression of " + orig.length + " into " + rv.length + " (or " + 100.0d
+ // * (((double) orig.length) / ((double) rv.length)) + "% savings)");
+ return rv;
+ } catch (IOException ioe) {
+ //_log.error("Error compressing?!", ioe);
+ return null;
+ } finally {
+ ReusableGZIPOutputStream.release(out);
+ }
+
+ }
+
+ /** decompress the GZIP compressed data (returning null on error) */
+ public static byte[] decompress(byte orig[]) throws IOException {
+ return (orig != null ? decompress(orig, 0, orig.length) : null);
+ }
+ public static byte[] decompress(byte orig[], int offset, int length) throws IOException {
+ if ((orig == null) || (orig.length <= 0)) return orig;
+
+ ReusableGZIPInputStream in = ReusableGZIPInputStream.acquire();
+ in.initialize(new ByteArrayInputStream(orig, offset, length));
+
+ ByteCache cache = ByteCache.getInstance(8, MAX_UNCOMPRESSED);
+ ByteArray outBuf = cache.acquire();
+ int written = 0;
+        while (written < MAX_UNCOMPRESSED) {
+            int read = in.read(outBuf.getData(), written, MAX_UNCOMPRESSED-written);
+            if (read == -1)
+                break;
+            written += read;
+        }
+ byte rv[] = new byte[written];
+ System.arraycopy(outBuf.getData(), 0, rv, 0, written);
+ cache.release(outBuf);
+ ReusableGZIPInputStream.release(in);
+ return rv;
+ }
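+
+    /*
+     * Usage sketch: GZIP round trip (the input must stay under MAX_UNCOMPRESSED, 40KB).
+     *
+     *   byte orig[] = getUTF8("hello syndie");
+     *   byte packed[] = compress(orig);
+     *   byte back[] = decompress(packed);   // Arrays.equals(orig, back)
+     */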
+
+ public static byte[] getUTF8(String orig) {
+ if (orig == null) return null;
+ try {
+ return orig.getBytes("UTF-8");
+ } catch (UnsupportedEncodingException uee) {
+ throw new RuntimeException("no utf8!?");
+ }
+ }
+ public static byte[] getUTF8(StringBuffer orig) {
+ if (orig == null) return null;
+ return getUTF8(orig.toString());
+ }
+ public static String getUTF8(byte orig[]) {
+ if (orig == null) return null;
+ try {
+ return new String(orig, "UTF-8");
+ } catch (UnsupportedEncodingException uee) {
+ throw new RuntimeException("no utf8!?");
+ }
+ }
+ public static String getUTF8(byte orig[], int offset, int len) {
+ if (orig == null) return null;
+ try {
+ return new String(orig, offset, len, "UTF-8");
+ } catch (UnsupportedEncodingException uee) {
+ throw new RuntimeException("No utf8!?");
+ }
+ }
+
+
+}
diff --git a/src/net/i2p/data/DataStructure.java b/src/net/i2p/data/DataStructure.java
new file mode 100644
index 0000000..d331eb8
--- /dev/null
+++ b/src/net/i2p/data/DataStructure.java
@@ -0,0 +1,65 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/**
+ * Defines the class as a standard object with particular bit representation,
+ * exposing methods to read and write that representation.
+ *
+ * @author jrandom
+ */
+public interface DataStructure /* extends Serializable */ {
+ /**
+ * Load up the current object with data from the given stream. Data loaded
+ * this way must match the I2P data structure specification.
+ *
+ * @param in stream to read from
+ * @throws DataFormatException if the data is improperly formatted
+ * @throws IOException if there was a problem reading the stream
+ */
+ public void readBytes(InputStream in) throws DataFormatException, IOException;
+
+ /**
+ * Write out the data structure to the stream, using the format defined in the
+ * I2P data structure specification.
+ *
+ * @param out stream to write to
+ * @throws DataFormatException if the data was incomplete or not yet ready to be written
+ * @throws IOException if there was a problem writing to the stream
+ */
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException;
+
+ /**
+ * render the structure into modified base 64 notation
+ * @return null on error
+ */
+ public String toBase64();
+
+ /**
+ * Load the structure from the base 64 encoded data provided
+ *
+ */
+ public void fromBase64(String data) throws DataFormatException;
+
+ public byte[] toByteArray();
+
+ public void fromByteArray(byte data[]) throws DataFormatException;
+
+ /**
+ * Calculate the SHA256 value of this object (useful for a few scenarios)
+ *
+ * @return SHA256 hash, or null if there were problems (data format or io errors)
+ */
+ public Hash calculateHash();
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/DataStructureImpl.java b/src/net/i2p/data/DataStructureImpl.java
new file mode 100644
index 0000000..fe7fe9b
--- /dev/null
+++ b/src/net/i2p/data/DataStructureImpl.java
@@ -0,0 +1,80 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+import net.i2p.crypto.SHA256Generator;
+import net.i2p.util.Log;
+
+/**
+ * Base implementation of all data structures
+ *
+ * @author jrandom
+ */
+public abstract class DataStructureImpl implements DataStructure {
+ private final static Log _log = new Log(DataStructureImpl.class);
+
+ public String toBase64() {
+ byte data[] = toByteArray();
+ if (data == null)
+ return null;
+
+ return Base64.encode(data);
+ }
+
+ public void fromBase64(String data) throws DataFormatException {
+ if (data == null) throw new DataFormatException("Null data passed in");
+ byte bytes[] = Base64.decode(data);
+ fromByteArray(bytes);
+ }
+
+ public Hash calculateHash() {
+ byte data[] = toByteArray();
+ if (data != null) return SHA256Generator.getInstance().calculateHash(data);
+ return null;
+ }
+
+ public byte[] toByteArray() {
+ try {
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+ writeBytes(baos);
+ return baos.toByteArray();
+ } catch (IOException ioe) {
+ _log.error("Error writing out the byte array", ioe);
+ return null;
+ } catch (DataFormatException dfe) {
+ _log.error("Error writing out the byte array", dfe);
+ return null;
+ }
+ }
+
+ public void fromByteArray(byte data[]) throws DataFormatException {
+ if (data == null) throw new DataFormatException("Null data passed in");
+ try {
+ ByteArrayInputStream bais = new ByteArrayInputStream(data);
+ readBytes(bais);
+ } catch (IOException ioe) {
+ throw new DataFormatException("Error reading the byte array", ioe);
+ }
+ }
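+
+    /*
+     * Usage sketch: any concrete DataStructure (Hash is used here purely as an
+     * illustration) can be round-tripped through its base 64 form.
+     *
+     *   String b64 = someHash.toBase64(); // null only if serialization failed
+     *   Hash copy = new Hash();
+     *   copy.fromBase64(b64);             // copy.equals(someHash)
+     */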
+
+ /**
+     * Repeatedly reads until the buffer is full or an IOException is thrown.
+ *
+ * @return number of bytes read (should always equal target.length)
+ */
+ protected int read(InputStream in, byte target[]) throws IOException {
+ return DataHelper.read(in, target);
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/Destination.java b/src/net/i2p/data/Destination.java
new file mode 100644
index 0000000..7f190e3
--- /dev/null
+++ b/src/net/i2p/data/Destination.java
@@ -0,0 +1,177 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.FileInputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines an end point in the I2P network. The Destination may move around
+ * in the network, but messages sent to the Destination will find it.
+ *
+ * @author jrandom
+ */
+public class Destination extends DataStructureImpl {
+ private final static Log _log = new Log(Destination.class);
+ private Certificate _certificate;
+ private SigningPublicKey _signingKey;
+ private PublicKey _publicKey;
+ private Hash __calculatedHash;
+
+ public Destination() {
+ setCertificate(null);
+ setSigningPublicKey(null);
+ setPublicKey(null);
+ __calculatedHash = null;
+ }
+
+ /**
+ * alternative constructor which takes a base64 string representation
+ * @param s a Base64 representation of the destination, as (eg) is used in hosts.txt
+ */
+ public Destination(String s) throws DataFormatException {
+ this();
+ fromBase64(s);
+ }
+
+ public Certificate getCertificate() {
+ return _certificate;
+ }
+
+ public void setCertificate(Certificate cert) {
+ _certificate = cert;
+ __calculatedHash = null;
+ }
+
+ public PublicKey getPublicKey() {
+ return _publicKey;
+ }
+
+ public void setPublicKey(PublicKey key) {
+ _publicKey = key;
+ __calculatedHash = null;
+ }
+
+ public SigningPublicKey getSigningPublicKey() {
+ return _signingKey;
+ }
+
+ public void setSigningPublicKey(SigningPublicKey key) {
+ _signingKey = key;
+ __calculatedHash = null;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _publicKey = new PublicKey();
+ _publicKey.readBytes(in);
+ _signingKey = new SigningPublicKey();
+ _signingKey.readBytes(in);
+ _certificate = new Certificate();
+ _certificate.readBytes(in);
+ __calculatedHash = null;
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if ((_certificate == null) || (_publicKey == null) || (_signingKey == null))
+ throw new DataFormatException("Not enough data to format the destination");
+ _publicKey.writeBytes(out);
+ _signingKey.writeBytes(out);
+ _certificate.writeBytes(out);
+ }
+
+ public int writeBytes(byte target[], int offset) {
+ int cur = offset;
+ System.arraycopy(_publicKey.getData(), 0, target, cur, PublicKey.KEYSIZE_BYTES);
+ cur += PublicKey.KEYSIZE_BYTES;
+ System.arraycopy(_signingKey.getData(), 0, target, cur, SigningPublicKey.KEYSIZE_BYTES);
+ cur += SigningPublicKey.KEYSIZE_BYTES;
+ cur += _certificate.writeBytes(target, cur);
+ return cur - offset;
+ }
+
+ public int readBytes(byte source[], int offset) throws DataFormatException {
+ if (source == null) throw new DataFormatException("Null source");
+ if (source.length <= offset + PublicKey.KEYSIZE_BYTES + SigningPublicKey.KEYSIZE_BYTES)
+ throw new DataFormatException("Not enough data (len=" + source.length + " off=" + offset + ")");
+ int cur = offset;
+
+ _publicKey = new PublicKey();
+ byte buf[] = new byte[PublicKey.KEYSIZE_BYTES];
+ System.arraycopy(source, cur, buf, 0, PublicKey.KEYSIZE_BYTES);
+ _publicKey.setData(buf);
+ cur += PublicKey.KEYSIZE_BYTES;
+
+ _signingKey = new SigningPublicKey();
+ buf = new byte[SigningPublicKey.KEYSIZE_BYTES];
+ System.arraycopy(source, cur, buf, 0, SigningPublicKey.KEYSIZE_BYTES);
+ _signingKey.setData(buf);
+ cur += SigningPublicKey.KEYSIZE_BYTES;
+
+ _certificate = new Certificate();
+ cur += _certificate.readBytes(source, cur);
+
+ return cur - offset;
+ }
+
+ public int size() {
+ return PublicKey.KEYSIZE_BYTES + SigningPublicKey.KEYSIZE_BYTES + _certificate.size();
+ }
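+
+    /*
+     * Illustrative sketch of the fixed wire layout used by the array-based
+     * readBytes/writeBytes above ("dest" is an assumed, fully populated instance):
+     *
+     *   byte serialized[] = new byte[dest.size()];
+     *   dest.writeBytes(serialized, 0); // PublicKey (256) || SigningPublicKey (128) || Certificate
+     *   Destination copy = new Destination();
+     *   copy.readBytes(serialized, 0);  // copy.equals(dest)
+     */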
+
+ public boolean equals(Object object) {
+ if ((object == null) || !(object instanceof Destination)) return false;
+ Destination dst = (Destination) object;
+ return DataHelper.eq(getCertificate(), dst.getCertificate())
+ && DataHelper.eq(getSigningPublicKey(), dst.getSigningPublicKey())
+ && DataHelper.eq(getPublicKey(), dst.getPublicKey());
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(getCertificate()) + DataHelper.hashCode(getSigningPublicKey())
+ + DataHelper.hashCode(getPublicKey());
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(128);
+ buf.append("[Destination: ");
+ buf.append("\n\tHash: ").append(calculateHash().toBase64());
+ buf.append("\n\tPublic Key: ").append(getPublicKey());
+ buf.append("\n\tSigning Public Key: ").append(getSigningPublicKey());
+ buf.append("\n\tCertificate: ").append(getCertificate());
+ buf.append("]");
+ return buf.toString();
+ }
+
+ public Hash calculateHash() {
+ if (__calculatedHash == null) __calculatedHash = super.calculateHash();
+ return __calculatedHash;
+ }
+
+ public static void main(String args[]) {
+ if (args.length == 0) {
+ System.err.println("Usage: Destination filename");
+ } else {
+ FileInputStream in = null;
+ try {
+ in = new FileInputStream(args[0]);
+ Destination d = new Destination();
+ d.readBytes(in);
+ System.out.println(d.toBase64());
+ } catch (Exception e) {
+ e.printStackTrace();
+ } finally {
+ if (in != null) try { in.close(); } catch (IOException ioe) {}
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/Hash.java b/src/net/i2p/data/Hash.java
new file mode 100644
index 0000000..2eeb9f0
--- /dev/null
+++ b/src/net/i2p/data/Hash.java
@@ -0,0 +1,266 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines the hash as defined by the I2P data structure spec.
+ * A hash is the SHA-256 of some data, taking up 32 bytes.
+ *
+ * @author jrandom
+ */
+public class Hash extends DataStructureImpl {
+ private final static Log _log = new Log(Hash.class);
+ private byte[] _data;
+ private volatile String _stringified;
+ private volatile String _base64ed;
+ private Map _xorCache;
+
+ public final static int HASH_LENGTH = 32;
+ public final static Hash FAKE_HASH = new Hash(new byte[HASH_LENGTH]);
+
+ private static final int MAX_CACHED_XOR = 1024;
+
+ public Hash() {
+ setData(null);
+ }
+
+ public Hash(byte data[]) {
+ setData(data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ _stringified = null;
+ _base64ed = null;
+ }
+
+ /**
+ * Prepare this hash's cache for xor values - very few hashes will need it,
+ * so we don't want to waste the memory, and lazy initialization would incur
+ * online overhead to verify the initialization.
+ *
+ */
+ public void prepareCache() {
+ synchronized (this) {
+ if (_xorCache == null)
+ _xorCache = new HashMap(MAX_CACHED_XOR);
+ }
+ }
+
+ /**
+ * Calculate the xor with the current object and the specified hash,
+ * caching values where possible. Currently this keeps up to MAX_CACHED_XOR
+ * (1024) entries, and uses an essentially random ejection policy. Later
+ * perhaps go for an LRU or FIFO?
+ *
+ * @throws IllegalStateException if you try to use the cache without first
+ * preparing this object's cache via .prepareCache()
+ */
+ public byte[] cachedXor(Hash key) throws IllegalStateException {
+ if (_xorCache == null)
+ throw new IllegalStateException("To use the cache, you must first prepare it");
+ byte[] distance = (byte[])_xorCache.get(key);
+
+ if (distance == null) {
+ // not cached, lets cache it
+ int cached = 0;
+ synchronized (_xorCache) {
+ int toRemove = _xorCache.size() + 1 - MAX_CACHED_XOR;
+ if (toRemove > 0) {
+ Set keys = new HashSet(toRemove);
+ // this removes essentially random keys - we dont maintain any sort
+ // of LRU or age. perhaps we should?
+ int removed = 0;
+ for (Iterator iter = _xorCache.keySet().iterator(); iter.hasNext() && removed < toRemove; removed++)
+ keys.add(iter.next());
+ for (Iterator iter = keys.iterator(); iter.hasNext(); )
+ _xorCache.remove(iter.next());
+ }
+ distance = DataHelper.xor(key.getData(), getData());
+ _xorCache.put(key, (Object) distance);
+ cached = _xorCache.size();
+ }
+ if (_log.shouldLog(Log.DEBUG)) {
+ // explicit buffer, since the compiler can't guess how long it'll be
+ StringBuffer buf = new StringBuffer(128);
+ buf.append("miss [").append(cached).append("] from ");
+ buf.append(DataHelper.toHexString(getData())).append(" to ");
+ buf.append(DataHelper.toHexString(key.getData()));
+ _log.debug(buf.toString(), new Exception());
+ }
+ } else {
+ if (_log.shouldLog(Log.DEBUG)) {
+ // explicit buffer, since the compiler can't guess how long it'll be
+ StringBuffer buf = new StringBuffer(128);
+ buf.append("hit from ");
+ buf.append(DataHelper.toHexString(getData())).append(" to ");
+ buf.append(DataHelper.toHexString(key.getData()));
+ _log.debug(buf.toString());
+ }
+ }
+ return distance;
+ }
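+
+    /*
+     * Usage sketch for the xor cache ("peer" is an assumed Hash of another peer):
+     *
+     *   localHash.prepareCache();                    // required, or cachedXor() throws IllegalStateException
+     *   byte distance[] = localHash.cachedXor(peer); // 32 byte xor distance, cached for reuse
+     */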
+
+ public void clearXorCache() {
+ _xorCache = null;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[HASH_LENGTH];
+ _stringified = null;
+ _base64ed = null;
+ int read = read(in, _data);
+ if (read != HASH_LENGTH) throw new DataFormatException("Not enough bytes to read the hash");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the hash to write out");
+        if (_data.length != HASH_LENGTH) throw new DataFormatException("Invalid size of data in the hash");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof Hash)) return false;
+ return DataHelper.eq(_data, ((Hash) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ if (_stringified == null) {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[Hash: ");
+ if (_data == null) {
+ buf.append("null hash");
+ } else {
+ buf.append(toBase64());
+ }
+ buf.append("]");
+ _stringified = buf.toString();
+ }
+ return _stringified;
+ }
+
+ public String toBase64() {
+ if (_base64ed == null) {
+ _base64ed = super.toBase64();
+ }
+ return _base64ed;
+ }
+
+ public static void main(String args[]) {
+ testFill();
+ testOverflow();
+ testFillCheck();
+ }
+
+ private static void testFill() {
+ Hash local = new Hash(new byte[HASH_LENGTH]); // all zeroes
+ local.prepareCache();
+ for (int i = 0; i < MAX_CACHED_XOR; i++) {
+ byte t[] = new byte[HASH_LENGTH];
+ for (int j = 0; j < HASH_LENGTH; j++)
+ t[j] = (byte)((i >> j) & 0xFF);
+ Hash cur = new Hash(t);
+ local.cachedXor(cur);
+ if (local._xorCache.size() != i+1) {
+ _log.error("xor cache size where i=" + i + " isn't correct! size = "
+ + local._xorCache.size());
+ return;
+ }
+ }
+ _log.debug("Fill test passed");
+ }
+ private static void testOverflow() {
+ Hash local = new Hash(new byte[HASH_LENGTH]); // all zeroes
+ local.prepareCache();
+ for (int i = 0; i < MAX_CACHED_XOR*2; i++) {
+ byte t[] = new byte[HASH_LENGTH];
+ for (int j = 0; j < HASH_LENGTH; j++)
+ t[j] = (byte)((i >> j) & 0xFF);
+ Hash cur = new Hash(t);
+ local.cachedXor(cur);
+ if (i < MAX_CACHED_XOR) {
+ if (local._xorCache.size() != i+1) {
+ _log.error("xor cache size where i=" + i + " isn't correct! size = "
+ + local._xorCache.size());
+ return;
+ }
+ } else {
+ if (local._xorCache.size() > MAX_CACHED_XOR) {
+ _log.error("xor cache size where i=" + i + " isn't correct! size = "
+ + local._xorCache.size());
+ return;
+ }
+ }
+ }
+ _log.debug("overflow test passed");
+ }
+ private static void testFillCheck() {
+ Set hashes = new HashSet();
+ Hash local = new Hash(new byte[HASH_LENGTH]); // all zeroes
+ local.prepareCache();
+ // fill 'er up
+ for (int i = 0; i < MAX_CACHED_XOR; i++) {
+ byte t[] = new byte[HASH_LENGTH];
+ for (int j = 0; j < HASH_LENGTH; j++)
+ t[j] = (byte)((i >> j) & 0xFF);
+ Hash cur = new Hash(t);
+ hashes.add(cur);
+ local.cachedXor(cur);
+ if (local._xorCache.size() != i+1) {
+ _log.error("xor cache size where i=" + i + " isn't correct! size = "
+ + local._xorCache.size());
+ return;
+ }
+ }
+ // now lets recheck using those same hash objects
+ // and see if they're cached
+ for (Iterator iter = hashes.iterator(); iter.hasNext(); ) {
+ Hash cur = (Hash)iter.next();
+ if (!local._xorCache.containsKey(cur)) {
+ _log.error("checking the cache, we dont have "
+ + DataHelper.toHexString(cur.getData()));
+ return;
+ }
+ }
+ // now lets recheck with new objects but the same values
+        // and see if they're cached
+ for (int i = 0; i < MAX_CACHED_XOR; i++) {
+ byte t[] = new byte[HASH_LENGTH];
+ for (int j = 0; j < HASH_LENGTH; j++)
+ t[j] = (byte)((i >> j) & 0xFF);
+ Hash cur = new Hash(t);
+ if (!local._xorCache.containsKey(cur)) {
+ _log.error("checking the cache, we do NOT have "
+ + DataHelper.toHexString(cur.getData()));
+ return;
+ }
+ }
+ _log.debug("Fill check test passed");
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/PrivateKey.java b/src/net/i2p/data/PrivateKey.java
new file mode 100644
index 0000000..6b4f923
--- /dev/null
+++ b/src/net/i2p/data/PrivateKey.java
@@ -0,0 +1,100 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+import net.i2p.crypto.KeyGenerator;
+
+/**
+ * Defines the PrivateKey as defined by the I2P data structure spec.
+ * A private key is a 256 byte Integer. The private key represents only the
+ * exponent, not the primes, which are constant and defined in the crypto spec.
+ *
+ * @author jrandom
+ */
+public class PrivateKey extends DataStructureImpl {
+ private final static Log _log = new Log(PrivateKey.class);
+ private byte[] _data;
+
+ public final static int KEYSIZE_BYTES = 256;
+
+ public PrivateKey() {
+ setData(null);
+ }
+ public PrivateKey(byte data[]) { setData(data); }
+
+ /** constructs from base64
+ * @param base64Data a string of base64 data (the output of .toBase64() called
+ * on a prior instance of PrivateKey
+ */
+ public PrivateKey(String base64Data) throws DataFormatException {
+ this();
+ fromBase64(base64Data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[KEYSIZE_BYTES];
+ int read = read(in, _data);
+ if (read != KEYSIZE_BYTES) throw new DataFormatException("Not enough bytes to read the private key");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the private key to write out");
+ if (_data.length != KEYSIZE_BYTES)
+ throw new DataFormatException("Invalid size of data in the private key [" + _data.length + "]");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof PrivateKey)) return false;
+ return DataHelper.eq(_data, ((PrivateKey) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[PrivateKey: ");
+ if (_data == null) {
+ buf.append("null key");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+
+ /** derives a new PublicKey object derived from the secret contents
+ * of this PrivateKey
+ * @return a PublicKey object
+ */
+ public PublicKey toPublic() {
+ return KeyGenerator.getPublicKey(this);
+ }
+
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/PublicKey.java b/src/net/i2p/data/PublicKey.java
new file mode 100644
index 0000000..2f271ac
--- /dev/null
+++ b/src/net/i2p/data/PublicKey.java
@@ -0,0 +1,94 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines the PublicKey as defined by the I2P data structure spec.
+ * A public key is a 256 byte Integer. The public key represents only the
+ * exponent, not the primes, which are constant and defined in the crypto spec.
+ *
+ * @author jrandom
+ */
+public class PublicKey extends DataStructureImpl {
+ private final static Log _log = new Log(PublicKey.class);
+ private byte[] _data;
+
+ public final static int KEYSIZE_BYTES = 256;
+
+ public PublicKey() {
+ setData(null);
+ }
+ public PublicKey(byte data[]) {
+ if ( (data == null) || (data.length != KEYSIZE_BYTES) )
+ throw new IllegalArgumentException("Data must be specified, and the correct size");
+ setData(data);
+ }
+
+ /** constructs from base64
+ * @param base64Data a string of base64 data (the output of .toBase64() called
+ * on a prior instance of PublicKey
+ */
+ public PublicKey(String base64Data) throws DataFormatException {
+ this();
+ fromBase64(base64Data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[KEYSIZE_BYTES];
+ int read = read(in, _data);
+ if (read != KEYSIZE_BYTES) throw new DataFormatException("Not enough bytes to read the public key");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the public key to write out");
+ if (_data.length != KEYSIZE_BYTES) throw new DataFormatException("Invalid size of data in the public key");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof PublicKey)) return false;
+ return DataHelper.eq(_data, ((PublicKey) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[PublicKey: ");
+ if (_data == null) {
+ buf.append("null key");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/RoutingKeyGenerator.java b/src/net/i2p/data/RoutingKeyGenerator.java
new file mode 100644
index 0000000..5589d58
--- /dev/null
+++ b/src/net/i2p/data/RoutingKeyGenerator.java
@@ -0,0 +1,134 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.text.SimpleDateFormat;
+import java.util.Calendar;
+import java.util.Date;
+import java.util.GregorianCalendar;
+import java.util.TimeZone;
+
+import net.i2p.I2PAppContext;
+import net.i2p.crypto.SHA256Generator;
+import net.i2p.util.Log;
+import net.i2p.util.RandomSource;
+
+/**
+ * Component to manage the munging of hashes into routing keys - given a hash,
+ * perform some consistent transformation against it and return the result.
+ * This transformation is fed by the current "mod data".
+ *
+ * Right now the mod data is the current date (GMT) as a string: "yyyyMMdd",
+ * and the transformation takes the original hash, appends the bytes of that mod data,
+ * then returns the SHA256 of that concatenation.
+ *
+ * Do we want this to simply do the XOR of the SHA256 of the current mod data and
+ * the key? does that provide the randomization we need? It'd save an SHA256 op.
+ * Bah, too much effort to think about for so little gain. Other algorithms may come
+ * into play later on for making periodic updates to the routing key for data elements
+ * to mess with Sybil. This may be good enough though.
+ *
+ * Also - the method generateDateBasedModData() should be called after midnight GMT
+ * once per day to generate the correct routing keys!
+ *
+ */
+public class RoutingKeyGenerator {
+ private Log _log;
+ private I2PAppContext _context;
+
+ public RoutingKeyGenerator(I2PAppContext context) {
+ _log = context.logManager().getLog(RoutingKeyGenerator.class);
+ _context = context;
+ }
+ public static RoutingKeyGenerator getInstance() {
+ return I2PAppContext.getGlobalContext().routingKeyGenerator();
+ }
+
+ private byte _currentModData[];
+
+ private final static Calendar _cal = GregorianCalendar.getInstance(TimeZone.getTimeZone("GMT"));
+ private final static SimpleDateFormat _fmt = new SimpleDateFormat("yyyyMMdd");
+
+ public byte[] getModData() {
+ return _currentModData;
+ }
+
+ public void setModData(byte modData[]) {
+ _currentModData = modData;
+ }
+
+ /**
+ * Update the current modifier data with some bytes derived from the current
+ * date (yyyyMMdd in GMT)
+ *
+ */
+ public void generateDateBasedModData() {
+ Date today = null;
+ long now = _context.clock().now();
+ synchronized (_cal) {
+ _cal.setTime(new Date(now));
+ _cal.set(Calendar.YEAR, _cal.get(Calendar.YEAR)); // gcj <= 4.0 workaround
+ _cal.set(Calendar.DAY_OF_YEAR, _cal.get(Calendar.DAY_OF_YEAR)); // gcj <= 4.0 workaround
+ _cal.set(Calendar.HOUR_OF_DAY, 0);
+ _cal.set(Calendar.MINUTE, 0);
+ _cal.set(Calendar.SECOND, 0);
+ _cal.set(Calendar.MILLISECOND, 0);
+ today = _cal.getTime();
+ }
+
+ byte mod[] = null;
+ String modVal = null;
+ synchronized (_fmt) {
+ modVal = _fmt.format(today);
+ }
+ mod = new byte[modVal.length()];
+ for (int i = 0; i < modVal.length(); i++)
+ mod[i] = (byte)(modVal.charAt(i) & 0xFF);
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Routing modifier generated: " + modVal);
+ setModData(mod);
+ }
+
+ /**
+     * Generate a modified (yet consistent) hash from the origKey by taking the
+     * SHA256 of the origKey with the current modData appended to it, and using
+     * that digest as the routing key.
+ *
+ * This makes Sybil's job a lot harder, as she needs to essentially take over the
+ * whole keyspace.
+ *
+ * @throws IllegalArgumentException if origKey is null
+ */
+ public Hash getRoutingKey(Hash origKey) {
+ if (origKey == null) throw new IllegalArgumentException("Original key is null");
+ if (_currentModData == null) generateDateBasedModData();
+ byte modVal[] = new byte[Hash.HASH_LENGTH + _currentModData.length];
+ System.arraycopy(origKey.getData(), 0, modVal, 0, Hash.HASH_LENGTH);
+ System.arraycopy(_currentModData, 0, modVal, Hash.HASH_LENGTH, _currentModData.length);
+ return SHA256Generator.getInstance().calculateHash(modVal);
+ }
+
+ public static void main(String args[]) {
+ Hash k1 = new Hash();
+ byte k1d[] = new byte[Hash.HASH_LENGTH];
+ RandomSource.getInstance().nextBytes(k1d);
+ k1.setData(k1d);
+
+ for (int i = 0; i < 10; i++) {
+ System.out.println("K1: " + k1);
+ Hash k1m = RoutingKeyGenerator.getInstance().getRoutingKey(k1);
+ System.out.println("MOD: " + new String(RoutingKeyGenerator.getInstance().getModData()));
+ System.out.println("K1M: " + k1m);
+ }
+ try {
+ Thread.sleep(2000);
+ } catch (Throwable t) { // nop
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/SessionKey.java b/src/net/i2p/data/SessionKey.java
new file mode 100644
index 0000000..bf68528
--- /dev/null
+++ b/src/net/i2p/data/SessionKey.java
@@ -0,0 +1,98 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines the SessionKey as defined by the I2P data structure spec.
+ * A session key is a 32 byte Integer.
+ *
+ * @author jrandom
+ */
+public class SessionKey extends DataStructureImpl {
+ private final static Log _log = new Log(SessionKey.class);
+ private byte[] _data;
+ private Object _preparedKey;
+
+ public final static int KEYSIZE_BYTES = 32;
+ public static final SessionKey INVALID_KEY = new SessionKey(new byte[KEYSIZE_BYTES]);
+
+ public SessionKey() {
+ this(null);
+ }
+ public SessionKey(byte data[]) {
+ setData(data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ /**
+ * caveat: this method isn't synchronized with the preparedKey, so don't
+ * try to *change* the key data after already doing some
+ * encryption/decryption (or if you do change it, be sure this object isn't
+ * mid decrypt)
+ */
+ public void setData(byte[] data) {
+ _data = data;
+ _preparedKey = null;
+ }
+
+ /**
+ * retrieve an internal representation of the session key, as known
+ * by the AES engine used. this can be reused safely
+ */
+ public Object getPreparedKey() { return _preparedKey; }
+ public void setPreparedKey(Object obj) { _preparedKey = obj; }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[KEYSIZE_BYTES];
+ int read = read(in, _data);
+ if (read != KEYSIZE_BYTES) throw new DataFormatException("Not enough bytes to read the session key");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the session key to write out");
+        if (_data.length != KEYSIZE_BYTES) throw new DataFormatException("Invalid size of data in the session key");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof SessionKey)) return false;
+ return DataHelper.eq(_data, ((SessionKey) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ if (true) return super.toString();
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[SessionKey: ");
+ if (_data == null) {
+ buf.append("null key");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/SessionTag.java b/src/net/i2p/data/SessionTag.java
new file mode 100644
index 0000000..fb0886a
--- /dev/null
+++ b/src/net/i2p/data/SessionTag.java
@@ -0,0 +1,60 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.RandomSource;
+
+public class SessionTag extends ByteArray {
+ public final static int BYTE_LENGTH = 32;
+
+ public SessionTag() {
+ super();
+ }
+
+ public SessionTag(boolean create) {
+ super();
+ if (create) {
+ byte buf[] = new byte[BYTE_LENGTH];
+ RandomSource.getInstance().nextBytes(buf);
+ setData(buf);
+ }
+ }
+
+ public SessionTag(byte val[]) {
+ super();
+ setData(val);
+ }
+
+ public void setData(byte val[]) throws IllegalArgumentException {
+ if (val == null)
+ throw new NullPointerException("SessionTags cannot be null");
+ if (val.length != BYTE_LENGTH)
+ throw new IllegalArgumentException("SessionTags must be " + BYTE_LENGTH + " bytes");
+ super.setData(val);
+ setValid(BYTE_LENGTH);
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ byte data[] = new byte[BYTE_LENGTH];
+ int read = DataHelper.read(in, data);
+ if (read != BYTE_LENGTH)
+ throw new DataFormatException("Not enough data (read " + read + " wanted " + BYTE_LENGTH + ")");
+ setData(data);
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ out.write(getData());
+ }
+
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/Signature.java b/src/net/i2p/data/Signature.java
new file mode 100644
index 0000000..39e2c4d
--- /dev/null
+++ b/src/net/i2p/data/Signature.java
@@ -0,0 +1,83 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines the signature as defined by the I2P data structure spec.
+ * A signature is a 40 byte Integer verifying the authenticity of some data
+ * using the algorithm defined in the crypto spec.
+ *
+ * @author jrandom
+ */
+public class Signature extends DataStructureImpl {
+ private final static Log _log = new Log(Signature.class);
+ private byte[] _data;
+
+ public final static int SIGNATURE_BYTES = 40;
+ public final static byte[] FAKE_SIGNATURE = new byte[SIGNATURE_BYTES];
+ static {
+ for (int i = 0; i < SIGNATURE_BYTES; i++)
+ FAKE_SIGNATURE[i] = 0x00;
+ }
+
+ public Signature() { this(null); }
+ public Signature(byte data[]) { setData(data); }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[SIGNATURE_BYTES];
+ int read = read(in, _data);
+ if (read != SIGNATURE_BYTES) throw new DataFormatException("Not enough bytes to read the signature");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the signature to write out");
+        if (_data.length != SIGNATURE_BYTES) throw new DataFormatException("Invalid size of data in the signature");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof Signature)) return false;
+ return DataHelper.eq(_data, ((Signature) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[Signature: ");
+ if (_data == null) {
+ buf.append("null signature");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/data/SigningPrivateKey.java b/src/net/i2p/data/SigningPrivateKey.java
new file mode 100644
index 0000000..9fd65e3
--- /dev/null
+++ b/src/net/i2p/data/SigningPrivateKey.java
@@ -0,0 +1,96 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+import net.i2p.crypto.KeyGenerator;
+
+/**
+ * Defines the SigningPrivateKey as defined by the I2P data structure spec.
+ * A signing private key is a 20 byte Integer. The private key represents only the
+ * exponent, not the primes, which are constant and defined in the crypto spec.
+ * This key varies from the PrivateKey in its usage (signing, not decrypting)
+ *
+ * @author jrandom
+ */
+public class SigningPrivateKey extends DataStructureImpl {
+ private final static Log _log = new Log(SigningPrivateKey.class);
+ private byte[] _data;
+
+ public final static int KEYSIZE_BYTES = 20;
+
+ public SigningPrivateKey() { this((byte[])null); }
+ public SigningPrivateKey(byte data[]) { setData(data); }
+
+ /** constructs from base64
+ * @param base64Data a string of base64 data (the output of .toBase64() called
+ * on a prior instance of SigningPrivateKey
+ */
+ public SigningPrivateKey(String base64Data) throws DataFormatException {
+ this();
+ fromBase64(base64Data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[KEYSIZE_BYTES];
+ int read = read(in, _data);
+ if (read != KEYSIZE_BYTES) throw new DataFormatException("Not enough bytes to read the private key");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the private key to write out");
+ if (_data.length != KEYSIZE_BYTES) throw new DataFormatException("Invalid size of data in the private key");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof SigningPrivateKey)) return false;
+ return DataHelper.eq(_data, ((SigningPrivateKey) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[SigningPrivateKey: ");
+ if (_data == null) {
+ buf.append("null key");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+
+ /** converts this signing private key to its public equivalent
+ * @return a SigningPublicKey object derived from this private key
+ */
+ public SigningPublicKey toPublic() {
+ return KeyGenerator.getSigningPublicKey(this);
+ }
+}
\ No newline at end of file
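
The key classes in this patch are thin fixed-length wrappers, so a short usage sketch may help. The following is illustrative only: it assumes the net.i2p.data classes above are on the classpath, the demo class name is invented, and the all-zero array merely stands in for a real 20 byte DSA private key value produced by KeyGenerator.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    import net.i2p.data.SigningPrivateKey;

    public class SigningPrivateKeyDemo {
        public static void main(String args[]) throws Exception {
            // placeholder value - a real key would come from net.i2p.crypto.KeyGenerator
            byte raw[] = new byte[SigningPrivateKey.KEYSIZE_BYTES];
            SigningPrivateKey priv = new SigningPrivateKey(raw);

            // round trip through the fixed 20 byte wire encoding
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            priv.writeBytes(out);
            SigningPrivateKey copy = new SigningPrivateKey();
            copy.readBytes(new ByteArrayInputStream(out.toByteArray()));
            System.out.println("round trip ok? " + priv.equals(copy));

            // with a real key, priv.toPublic() would derive the matching SigningPublicKey
            // by delegating to KeyGenerator.getSigningPublicKey()
        }
    }
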
diff --git a/src/net/i2p/data/SigningPublicKey.java b/src/net/i2p/data/SigningPublicKey.java
new file mode 100644
index 0000000..b938eb0
--- /dev/null
+++ b/src/net/i2p/data/SigningPublicKey.java
@@ -0,0 +1,88 @@
+package net.i2p.data;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import net.i2p.util.Log;
+
+/**
+ * Defines the SigningPublicKey as defined by the I2P data structure spec.
+ * A signing public key is a 128 byte Integer (the DSA public value);
+ * the remaining DSA parameters are constant and defined in the crypto spec.
+ * This key varies from the PublicKey in its usage (verifying signatures, not encrypting).
+ *
+ * @author jrandom
+ */
+public class SigningPublicKey extends DataStructureImpl {
+ private final static Log _log = new Log(SigningPublicKey.class);
+ private byte[] _data;
+
+ public final static int KEYSIZE_BYTES = 128;
+
+ public SigningPublicKey() { this((byte[])null); }
+ public SigningPublicKey(byte data[]) { setData(data); }
+
+ /** constructs from base64
+ * @param base64Data a string of base64 data (the output of .toBase64() called
+ * on a prior instance of SigningPublicKey
+ */
+ public SigningPublicKey(String base64Data) throws DataFormatException {
+ this();
+ fromBase64(base64Data);
+ }
+
+ public byte[] getData() {
+ return _data;
+ }
+
+ public void setData(byte[] data) {
+ _data = data;
+ }
+
+ public void readBytes(InputStream in) throws DataFormatException, IOException {
+ _data = new byte[KEYSIZE_BYTES];
+ int read = read(in, _data);
+ if (read != KEYSIZE_BYTES) throw new DataFormatException("Not enough bytes to read the public key");
+ }
+
+ public void writeBytes(OutputStream out) throws DataFormatException, IOException {
+ if (_data == null) throw new DataFormatException("No data in the public key to write out");
+ if (_data.length != KEYSIZE_BYTES) throw new DataFormatException("Invalid size of data in the public key");
+ out.write(_data);
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || !(obj instanceof SigningPublicKey)) return false;
+ return DataHelper.eq(_data, ((SigningPublicKey) obj)._data);
+ }
+
+ public int hashCode() {
+ return DataHelper.hashCode(_data);
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("[SigningPublicKey: ");
+ if (_data == null) {
+ buf.append("null key");
+ } else {
+ buf.append("size: ").append(_data.length);
+ //int len = 32;
+ //if (len > _data.length) len = _data.length;
+ //buf.append(" first ").append(len).append(" bytes: ");
+ //buf.append(DataHelper.toString(_data, len));
+ }
+ buf.append("]");
+ return buf.toString();
+ }
+}
\ No newline at end of file
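
SigningPublicKey follows the same pattern with a 128 byte value; its base64 constructor pairs with the inherited toBase64() mentioned in the javadoc above. A minimal sketch, again with a placeholder value and an invented class name:

    import net.i2p.data.SigningPublicKey;

    public class SigningPublicKeyDemo {
        public static void main(String args[]) throws Exception {
            // placeholder value - a real 128 byte DSA public value would come from KeyGenerator
            byte raw[] = new byte[SigningPublicKey.KEYSIZE_BYTES];
            SigningPublicKey pub = new SigningPublicKey(raw);

            // toBase64() is inherited from the DataStructureImpl base class
            String encoded = pub.toBase64();
            SigningPublicKey copy = new SigningPublicKey(encoded);
            System.out.println("base64 round trip ok? " + pub.equals(copy));
        }
    }
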
diff --git a/src/net/i2p/stat/BufferedStatLog.java b/src/net/i2p/stat/BufferedStatLog.java
new file mode 100644
index 0000000..0d20737
--- /dev/null
+++ b/src/net/i2p/stat/BufferedStatLog.java
@@ -0,0 +1,210 @@
+package net.i2p.stat;
+
+import java.io.BufferedWriter;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.text.SimpleDateFormat;
+
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.StringTokenizer;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.I2PThread;
+import net.i2p.util.Log;
+
+/**
+ * Buffer stat events in a fixed size ring and flush them to the stat log file from a background writer thread.
+ */
+public class BufferedStatLog implements StatLog {
+ private I2PAppContext _context;
+ private Log _log;
+ private StatEvent _events[];
+ private int _eventNext;
+ private int _lastWrite;
+ /** flush stat events to disk after this many events (or 30s)*/
+ private int _flushFrequency;
+ private List _statFilters;
+ private String _lastFilters;
+ private BufferedWriter _out;
+ private String _outFile;
+ /** short circuit for adding data, set to true if some filters are set, false if the filter list is empty (so we can skip the sync) */
+ private volatile boolean _filtersSpecified;
+
+ private static final int BUFFER_SIZE = 1024;
+ private static final boolean DISABLE_LOGGING = false;
+
+ public BufferedStatLog(I2PAppContext ctx) {
+ _context = ctx;
+ _log = ctx.logManager().getLog(BufferedStatLog.class);
+ _events = new StatEvent[BUFFER_SIZE];
+ if (DISABLE_LOGGING) return;
+ for (int i = 0; i < BUFFER_SIZE; i++)
+ _events[i] = new StatEvent();
+ _eventNext = 0;
+ _lastWrite = _events.length-1;
+ _statFilters = new ArrayList(10);
+ _flushFrequency = 500;
+ updateFilters(); // read stat.logFilters / stat.logFile up front; otherwise shouldLog() never passes and nothing is ever buffered
+ I2PThread writer = new I2PThread(new StatLogWriter(), "StatLogWriter");
+ writer.setDaemon(true);
+ writer.start();
+ }
+
+ public void addData(String scope, String stat, long value, long duration) {
+ if (DISABLE_LOGGING) return;
+ if (!shouldLog(stat)) return;
+ synchronized (_events) {
+ _events[_eventNext].init(scope, stat, value, duration);
+ _eventNext = (_eventNext + 1) % _events.length;
+
+ if (_eventNext == _lastWrite)
+ _lastWrite = (_lastWrite + 1) % _events.length; // drop an event
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("AddData next=" + _eventNext + " lastWrite=" + _lastWrite);
+
+ if (_eventNext > _lastWrite) {
+ if (_eventNext - _lastWrite >= _flushFrequency)
+ _events.notifyAll();
+ } else {
+ if (_events.length - 1 - _lastWrite + _eventNext >= _flushFrequency)
+ _events.notifyAll();
+ }
+ }
+ }
+
+ private boolean shouldLog(String stat) {
+ if (!_filtersSpecified) return false;
+ synchronized (_statFilters) {
+ return _statFilters.contains(stat) || _statFilters.contains("*");
+ }
+ }
+
+ private void updateFilters() {
+ String val = _context.getProperty(StatManager.PROP_STAT_FILTER);
+ if (val != null) {
+ if ( (_lastFilters != null) && (_lastFilters.equals(val)) ) {
+ // noop
+ } else {
+ StringTokenizer tok = new StringTokenizer(val, ",");
+ synchronized (_statFilters) {
+ _statFilters.clear();
+ while (tok.hasMoreTokens())
+ _statFilters.add(tok.nextToken().trim());
+ if (_statFilters.size() > 0)
+ _filtersSpecified = true;
+ else
+ _filtersSpecified = false;
+ }
+ }
+ _lastFilters = val;
+ } else {
+ synchronized (_statFilters) {
+ _statFilters.clear();
+ _filtersSpecified = false;
+ }
+ }
+
+ String filename = _context.getProperty(StatManager.PROP_STAT_FILE);
+ if (filename == null)
+ filename = StatManager.DEFAULT_STAT_FILE;
+ if ( (_outFile != null) && (_outFile.equals(filename)) ) {
+ // noop
+ } else {
+ if (_out != null) try { _out.close(); } catch (IOException ioe) {}
+ _outFile = filename;
+ try {
+ _out = new BufferedWriter(new FileWriter(_outFile, true), 32*1024);
+ } catch (IOException ioe) { ioe.printStackTrace(); }
+ }
+ }
+
+ private class StatLogWriter implements Runnable {
+ private SimpleDateFormat _fmt = new SimpleDateFormat("yyyyMMdd HH:mm:ss.SSS");
+ public void run() {
+ int writeStart = -1;
+ int writeEnd = -1;
+ while (true) {
+ try {
+ synchronized (_events) {
+ if (_eventNext > _lastWrite) {
+ if (_eventNext - _lastWrite < _flushFrequency)
+ _events.wait(30*1000);
+ } else {
+ if (_events.length - 1 - _lastWrite + _eventNext < _flushFrequency)
+ _events.wait(30*1000);
+ }
+ writeStart = (_lastWrite + 1) % _events.length;
+ writeEnd = _eventNext;
+ _lastWrite = (writeEnd == 0 ? _events.length-1 : writeEnd - 1);
+ }
+ if (writeStart != writeEnd) {
+ try {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("writing " + writeStart +"->"+ writeEnd);
+ writeEvents(writeStart, writeEnd);
+ } catch (Exception e) {
+ _log.error("error writing " + writeStart +"->"+ writeEnd, e);
+ }
+ }
+ } catch (InterruptedException ie) {}
+ }
+ }
+
+ private void writeEvents(int start, int end) {
+ try {
+ updateFilters();
+ int cur = start;
+ while (cur != end) {
+ //if (shouldLog(_events[cur].getStat())) {
+ String when = null;
+ synchronized (_fmt) {
+ when = _fmt.format(new Date(_events[cur].getTime()));
+ }
+ _out.write(when);
+ _out.write(" ");
+ if (_events[cur].getScope() == null)
+ _out.write("noScope");
+ else
+ _out.write(_events[cur].getScope());
+ _out.write(" ");
+ _out.write(_events[cur].getStat());
+ _out.write(" ");
+ _out.write(Long.toString(_events[cur].getValue()));
+ _out.write(" ");
+ _out.write(Long.toString(_events[cur].getDuration()));
+ _out.write("\n");
+ //}
+ cur = (cur + 1) % _events.length;
+ }
+ _out.flush();
+ } catch (IOException ioe) {
+ _log.error("Error writing out", ioe);
+ }
+ }
+ }
+
+ private class StatEvent {
+ private long _time;
+ private String _scope;
+ private String _stat;
+ private long _value;
+ private long _duration;
+
+ public long getTime() { return _time; }
+ public String getScope() { return _scope; }
+ public String getStat() { return _stat; }
+ public long getValue() { return _value; }
+ public long getDuration() { return _duration; }
+
+ public void init(String scope, String stat, long value, long duration) {
+ _scope = scope;
+ _stat = stat;
+ _value = value;
+ _duration = duration;
+ _time = _context.clock().now();
+ }
+ }
+}
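
BufferedStatLog is driven by two properties read in updateFilters(): stat.logFilters (a comma separated list of stat names, or * for everything) and stat.logFile (default stats.log). A minimal sketch of turning it on, assuming I2PAppContext.getGlobalContext() is available and that context properties fall back to system properties; the demo.requestTime stat name and the wrapper class are hypothetical:

    import net.i2p.I2PAppContext;

    public class StatLogDemo {
        public static void main(String args[]) throws Exception {
            // assumed: context properties fall back to system properties
            System.setProperty("stat.logFilters", "demo.requestTime");
            System.setProperty("stat.logFile", "stats.log");

            I2PAppContext ctx = I2PAppContext.getGlobalContext();
            ctx.statManager().createRateStat("demo.requestTime", "How long requests take",
                                             "demo", new long[] { 60*1000 });

            // each matching data point is buffered and flushed to stats.log
            // by the StatLogWriter thread (after 500 events or 30 seconds)
            for (int i = 0; i < 100; i++)
                ctx.statManager().addRateData("demo.requestTime", 10 + i, 5);

            Thread.sleep(35*1000); // give the 30 second flush timer a chance to fire
        }
    }
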
diff --git a/src/net/i2p/stat/Frequency.java b/src/net/i2p/stat/Frequency.java
new file mode 100644
index 0000000..ef42108
--- /dev/null
+++ b/src/net/i2p/stat/Frequency.java
@@ -0,0 +1,170 @@
+package net.i2p.stat;
+
+/**
+ * Manage the calculation of a moving event frequency over a certain period.
+ *
+ */
+public class Frequency {
+ private double _avgInterval;
+ private double _minAverageInterval;
+ private long _period;
+ private long _lastEvent;
+ private long _start = now();
+ private long _count = 0;
+ private Object _lock = this; // new Object(); // in case we want to do fancy sync later
+
+ public Frequency(long period) {
+ setPeriod(period);
+ setLastEvent(0);
+ setAverageInterval(0);
+ setMinAverageInterval(0);
+ }
+
+ /** how long is this frequency averaged over? */
+ public long getPeriod() {
+ synchronized (_lock) {
+ return _period;
+ }
+ }
+
+ /** when did the last event occur? */
+ public long getLastEvent() {
+ synchronized (_lock) {
+ return _lastEvent;
+ }
+ }
+
+ /**
+ * on average over the last $period, after how many milliseconds are events coming in,
+ * as calculated during the last event occurrence?
+ *
+ */
+ public double getAverageInterval() {
+ synchronized (_lock) {
+ return _avgInterval;
+ }
+ }
+
+ /** what is the lowest average interval (aka most frequent) we have seen? */
+ public double getMinAverageInterval() {
+ synchronized (_lock) {
+ return _minAverageInterval;
+ }
+ }
+
+ /** calculate how many events would occur in a period given the current average */
+ public double getAverageEventsPerPeriod() {
+ synchronized (_lock) {
+ if (_avgInterval > 0) return _period / _avgInterval;
+
+ return 0;
+ }
+ }
+
+ /** calculate how many events would occur in a period at the peak frequency seen (i.e. using the minimum average interval) */
+ public double getMaxAverageEventsPerPeriod() {
+ synchronized (_lock) {
+ if (_minAverageInterval > 0) return _period / _minAverageInterval;
+
+ return 0;
+ }
+ }
+
+ /** over the lifetime of this stat, without any decay or weighting, what was the average interval between events? */
+ public double getStrictAverageInterval() {
+ synchronized (_lock) {
+ long duration = now() - _start;
+ if ((duration <= 0) || (_count <= 0)) return 0;
+
+ return duration / (double) _count; // avoid integer truncation of the average
+ }
+ }
+
+ /** using the strict average interval, how many events occur within an average period? */
+ public double getStrictAverageEventsPerPeriod() {
+ double avgInterval = getStrictAverageInterval();
+ synchronized (_lock) {
+ if (avgInterval > 0) return _period / avgInterval;
+
+ return 0;
+ }
+ }
+
+ /** how many events have occurred within the lifetime of this stat? */
+ public long getEventCount() {
+ synchronized (_lock) {
+ return _count;
+ }
+ }
+
+ /**
+ * Take note that a new event occurred, recalculating all the averages and frequencies
+ *
+ */
+ public void eventOccurred() {
+ recalculate(true);
+ }
+
+ /**
+ * Recalculate the averages
+ *
+ */
+ public void recalculate() {
+ recalculate(false);
+ }
+
+ /**
+ * Recalculate, but only update the lastEvent if eventOccurred
+ */
+ private void recalculate(boolean eventOccurred) {
+ synchronized (_lock) {
+ long now = now();
+ long interval = now - _lastEvent;
+ if (interval >= _period)
+ interval = _period - 1;
+ else if (interval <= 0) interval = 1;
+
+ double oldWeight = 1 - (interval / (float) _period);
+ double newWeight = (interval / (float) _period);
+
+ double oldInterval = _avgInterval * oldWeight;
+ double newInterval = interval * newWeight;
+ _avgInterval = oldInterval + newInterval;
+
+ if ((_avgInterval < _minAverageInterval) || (_minAverageInterval <= 0)) _minAverageInterval = _avgInterval;
+
+ if (eventOccurred) {
+ _lastEvent = now;
+ _count++;
+ }
+ }
+ }
+
+ private void setPeriod(long milliseconds) {
+ synchronized (_lock) {
+ _period = milliseconds;
+ }
+ }
+
+ private void setLastEvent(long when) {
+ synchronized (_lock) {
+ _lastEvent = when;
+ }
+ }
+
+ private void setAverageInterval(double msInterval) {
+ synchronized (_lock) {
+ _avgInterval = msInterval;
+ }
+ }
+
+ private void setMinAverageInterval(double minAverageInterval) {
+ synchronized (_lock) {
+ _minAverageInterval = minAverageInterval;
+ }
+ }
+
+ private final static long now() {
+ return System.currentTimeMillis();
+ }
+}
\ No newline at end of file
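
A small sketch of the Frequency API defined above; the average interval is an exponentially weighted moving value, so with a 60 second period it converges only gradually toward the true event spacing. The demo class is invented for illustration.

    import net.i2p.stat.Frequency;

    public class FrequencyDemo {
        public static void main(String args[]) throws InterruptedException {
            // track how often an event fires, averaged over a 60 second period
            Frequency freq = new Frequency(60*1000);
            for (int i = 0; i < 10; i++) {
                Thread.sleep(100);    // simulate an event roughly every 100ms
                freq.eventOccurred(); // reweights the average interval
            }
            System.out.println("weighted avg interval (ms): " + freq.getAverageInterval());
            System.out.println("events per period:          " + freq.getAverageEventsPerPeriod());
            System.out.println("lifetime event count:       " + freq.getEventCount());
        }
    }
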
diff --git a/src/net/i2p/stat/FrequencyStat.java b/src/net/i2p/stat/FrequencyStat.java
new file mode 100644
index 0000000..4e01fc2
--- /dev/null
+++ b/src/net/i2p/stat/FrequencyStat.java
@@ -0,0 +1,64 @@
+package net.i2p.stat;
+
+/** coordinate an event frequency over various periods */
+public class FrequencyStat {
+ /** unique name of the statistic */
+ private String _statName;
+ /** grouping under which the stat is kept */
+ private String _groupName;
+ /** describe the stat */
+ private String _description;
+ /** actual frequency objects for this statistic */
+ private Frequency _frequencies[];
+
+ public FrequencyStat(String name, String description, String group, long periods[]) {
+ _statName = name;
+ _description = description;
+ _groupName = group;
+ _frequencies = new Frequency[periods.length];
+ for (int i = 0; i < periods.length; i++)
+ _frequencies[i] = new Frequency(periods[i]);
+ }
+
+ /** update all of the frequencies for the various periods */
+ public void eventOccurred() {
+ for (int i = 0; i < _frequencies.length; i++)
+ _frequencies[i].eventOccurred();
+ }
+
+ /** coalesce all the stats */
+ public void coalesceStats() {
+ //for (int i = 0; i < _frequencies.length; i++)
+ // _frequencies[i].coalesceStats();
+ }
+
+ public String getName() {
+ return _statName;
+ }
+
+ public String getGroupName() {
+ return _groupName;
+ }
+
+ public String getDescription() {
+ return _description;
+ }
+
+ public long[] getPeriods() {
+ long rv[] = new long[_frequencies.length];
+ for (int i = 0; i < _frequencies.length; i++)
+ rv[i] = _frequencies[i].getPeriod();
+ return rv;
+ }
+
+ public Frequency getFrequency(long period) {
+ for (int i = 0; i < _frequencies.length; i++) {
+ if (_frequencies[i].getPeriod() == period) return _frequencies[i];
+ }
+ return null;
+ }
+
+ public int hashCode() {
+ return _statName.hashCode();
+ }
+}
diff --git a/src/net/i2p/stat/PersistenceHelper.java b/src/net/i2p/stat/PersistenceHelper.java
new file mode 100644
index 0000000..8132688
--- /dev/null
+++ b/src/net/i2p/stat/PersistenceHelper.java
@@ -0,0 +1,51 @@
+package net.i2p.stat;
+
+import java.util.Properties;
+
+import net.i2p.util.Log;
+
+/** object orientation gives you hairy palms. */
+class PersistenceHelper {
+ private final static Log _log = new Log(PersistenceHelper.class);
+ private final static String NL = System.getProperty("line.separator");
+
+ public final static void add(StringBuffer buf, String prefix, String name, String description, double value) {
+ buf.append("# ").append(prefix).append(name).append(NL);
+ buf.append("# ").append(description).append(NL);
+ buf.append(prefix).append(name).append('=').append(value).append(NL).append(NL);
+ }
+
+ public final static void add(StringBuffer buf, String prefix, String name, String description, long value) {
+ buf.append("# ").append(prefix).append(name).append(NL);
+ buf.append("# ").append(description).append(NL);
+ buf.append(prefix).append(name).append('=').append(value).append(NL).append(NL);
+ }
+
+ public final static long getLong(Properties props, String prefix, String name) {
+ String val = props.getProperty(prefix + name);
+ if (val != null) {
+ try {
+ return Long.parseLong(val);
+ } catch (NumberFormatException nfe) {
+ _log.warn("Error formatting " + val + " into a long", nfe);
+ }
+ } else {
+ _log.warn("Key " + prefix + name + " does not exist");
+ }
+ return 0;
+ }
+
+ public final static double getDouble(Properties props, String prefix, String name) {
+ String val = props.getProperty(prefix + name);
+ if (val != null) {
+ try {
+ return Double.parseDouble(val);
+ } catch (NumberFormatException nfe) {
+ _log.warn("Error formatting " + val + " into a double", nfe);
+ }
+ } else {
+ _log.warn("Key " + prefix + name + " does not exist");
+ }
+ return 0;
+ }
+}
\ No newline at end of file
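
The helper emits plain java.util.Properties syntax plus '#' comment lines, which is what Rate.store()/load() below rely on. Since the class is package private, a sketch has to live in net.i2p.stat as well; the demo.requestTime prefix is hypothetical.

    package net.i2p.stat;

    import java.io.ByteArrayInputStream;
    import java.util.Properties;

    public class PersistenceHelperDemo {
        public static void main(String args[]) throws Exception {
            StringBuffer buf = new StringBuffer();
            PersistenceHelper.add(buf, "demo.requestTime.60m", ".lifetimeEventCount",
                                  "How many events have occurred since this stat was created?", 42L);

            // the emitted text parses straight back in as properties
            Properties props = new Properties();
            props.load(new ByteArrayInputStream(buf.toString().getBytes()));
            long count = PersistenceHelper.getLong(props, "demo.requestTime.60m", ".lifetimeEventCount");
            System.out.println("read back: " + count); // 42
        }
    }
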
diff --git a/src/net/i2p/stat/Rate.java b/src/net/i2p/stat/Rate.java
new file mode 100644
index 0000000..8ac385f
--- /dev/null
+++ b/src/net/i2p/stat/Rate.java
@@ -0,0 +1,512 @@
+package net.i2p.stat;
+
+import java.io.IOException;
+import java.util.Properties;
+
+import net.i2p.util.Clock;
+import net.i2p.util.Log;
+
+/**
+ * Simple rate calculator for periodically sampled data points - determining an
+ * average value over a period, the number of events in that period, the maximum number
+ * of events (using the interval between events), and lifetime data.
+ *
+ */
+public class Rate {
+ private final static Log _log = new Log(Rate.class);
+ private volatile double _currentTotalValue;
+ private volatile long _currentEventCount;
+ private volatile long _currentTotalEventTime;
+ private volatile double _lastTotalValue;
+ private volatile long _lastEventCount;
+ private volatile long _lastTotalEventTime;
+ private volatile double _extremeTotalValue;
+ private volatile long _extremeEventCount;
+ private volatile long _extremeTotalEventTime;
+ private volatile double _lifetimeTotalValue;
+ private volatile long _lifetimeEventCount;
+ private volatile long _lifetimeTotalEventTime;
+ private RateSummaryListener _summaryListener;
+ private RateStat _stat;
+
+ private volatile long _lastCoalesceDate;
+ private long _creationDate;
+ private long _period;
+
+ /** locked during coalesce and addData */
+ private Object _lock = new Object();
+
+ /** in the current (partial) period, what is the total value accrued through all events? */
+ public double getCurrentTotalValue() {
+ return _currentTotalValue;
+ }
+
+ /** in the current (partial) period, how many events have occurred? */
+ public long getCurrentEventCount() {
+ return _currentEventCount;
+ }
+
+ /** in the current (partial) period, how much of the time has been spent doing the events? */
+ public long getCurrentTotalEventTime() {
+ return _currentTotalEventTime;
+ }
+
+ /** in the last full period, what was the total value accrued through all events? */
+ public double getLastTotalValue() {
+ return _lastTotalValue;
+ }
+
+ /** in the last full period, how many events occurred? */
+ public long getLastEventCount() {
+ return _lastEventCount;
+ }
+
+ /** in the last full period, how much of the time was spent doing the events? */
+ public long getLastTotalEventTime() {
+ return _lastTotalEventTime;
+ }
+
+ /** what was the max total value accrued in any period? */
+ public double getExtremeTotalValue() {
+ return _extremeTotalValue;
+ }
+
+ /** when the max(totalValue) was achieved, how many events occurred in that period? */
+ public long getExtremeEventCount() {
+ return _extremeEventCount;
+ }
+
+ /** when the max(totalValue) was achieved, how much of the time was spent doing the events? */
+ public long getExtremeTotalEventTime() {
+ return _extremeTotalEventTime;
+ }
+
+ /** since rate creation, what was the total value accrued through all events? */
+ public double getLifetimeTotalValue() {
+ return _lifetimeTotalValue;
+ }
+
+ /** since rate creation, how many events have occurred? */
+ public long getLifetimeEventCount() {
+ return _lifetimeEventCount;
+ }
+
+ /** since rate creation, how much of the time was spent doing the events? */
+ public long getLifetimeTotalEventTime() {
+ return _lifetimeTotalEventTime;
+ }
+
+ /** when was the rate last coalesced? */
+ public long getLastCoalesceDate() {
+ return _lastCoalesceDate;
+ }
+
+ /** when was this rate created? */
+ public long getCreationDate() {
+ return _creationDate;
+ }
+
+ /** how large should this rate's cycle be? */
+ public long getPeriod() {
+ return _period;
+ }
+
+ public RateStat getRateStat() { return _stat; }
+ public void setRateStat(RateStat rs) { _stat = rs; }
+
+ /**
+ *
+ * @param period number of milliseconds in the period this rate deals with
+ * @throws IllegalArgumentException if the period is not greater than 0
+ */
+ public Rate(long period) throws IllegalArgumentException {
+ if (period <= 0) throw new IllegalArgumentException("The period must be strictly positive");
+ _currentTotalValue = 0.0d;
+ _currentEventCount = 0;
+ _currentTotalEventTime = 0;
+ _lastTotalValue = 0.0d;
+ _lastEventCount = 0;
+ _lastTotalEventTime = 0;
+ _extremeTotalValue = 0.0d;
+ _extremeEventCount = 0;
+ _extremeTotalEventTime = 0;
+ _lifetimeTotalValue = 0.0d;
+ _lifetimeEventCount = 0;
+ _lifetimeTotalEventTime = 0;
+
+ _creationDate = now();
+ _lastCoalesceDate = _creationDate;
+ _period = period;
+ }
+
+ /**
+ * Create a new rate and load its state from the properties, taking data
+ * from the data points underneath the given prefix.
+ * (e.g. prefix = "profile.dbIntroduction.60m", this will load the associated data points such
+ * as "profile.dbIntroduction.60m.lifetimeEventCount"). The data can be exported
+ * through store(outputStream, "profile.dbIntroduction.60m").
+ *
+ * @param prefix prefix to the property entries (should NOT end with a period)
+ * @param treatAsCurrent if true, we'll treat the loaded data as if no time has
+ * elapsed since it was written out, but if it is false, we'll
+ * treat the data with as much freshness (or staleness) as appropriate.
+ * @throws IllegalArgumentException if the data was formatted incorrectly
+ */
+ public Rate(Properties props, String prefix, boolean treatAsCurrent) throws IllegalArgumentException {
+ this(1);
+ load(props, prefix, treatAsCurrent);
+ }
+
+ /** accrue the data in the current period as an instantaneous event */
+ public void addData(long value) {
+ addData(value, 0);
+ }
+
+ /**
+ * Accrue the data in the current period as if the event took the specified amount of time
+ *
+ * @param value value to accrue in the current period
+ * @param eventDuration how long it took to accrue this data (set to 0 if it was instantaneous)
+ */
+ public void addData(long value, long eventDuration) {
+ synchronized (_lock) {
+ _currentTotalValue += value;
+ _currentEventCount++;
+ _currentTotalEventTime += eventDuration;
+
+ _lifetimeTotalValue += value;
+ _lifetimeEventCount++;
+ _lifetimeTotalEventTime += eventDuration;
+ }
+ }
+
+ /** 2s is plenty of slack to deal with slow coalescing (across many stats) */
+ private static final int SLACK = 2000;
+ public void coalesce() {
+ long now = now();
+ synchronized (_lock) {
+ long measuredPeriod = now - _lastCoalesceDate;
+ if (measuredPeriod < _period - SLACK) {
+ // no need to coalesce (assuming we only try to do so once per minute)
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("not coalescing, measuredPeriod = " + measuredPeriod + " period = " + _period);
+ return;
+ }
+
+ // ok ok, lets coalesce
+
+ // how much were we off by? (so that we can sample down the measured values)
+ double periodFactor = measuredPeriod / (double)_period;
+ _lastTotalValue = _currentTotalValue / periodFactor;
+ _lastEventCount = (long) ( (_currentEventCount + periodFactor - 1) / periodFactor);
+ _lastTotalEventTime = (long) (_currentTotalEventTime / periodFactor);
+ _lastCoalesceDate = now;
+
+ if (_lastTotalValue > _extremeTotalValue) {
+ _extremeTotalValue = _lastTotalValue;
+ _extremeEventCount = _lastEventCount;
+ _extremeTotalEventTime = _lastTotalEventTime;
+ }
+
+ _currentTotalValue = 0.0D;
+ _currentEventCount = 0;
+ _currentTotalEventTime = 0;
+ }
+ if (_summaryListener != null)
+ _summaryListener.add(_lastTotalValue, _lastEventCount, _lastTotalEventTime, _period);
+ }
+
+ public void setSummaryListener(RateSummaryListener listener) { _summaryListener = listener; }
+ public RateSummaryListener getSummaryListener() { return _summaryListener; }
+
+ /** what was the average value across the events in the last period? */
+ public double getAverageValue() {
+ if ((_lastTotalValue != 0) && (_lastEventCount > 0))
+ return _lastTotalValue / _lastEventCount;
+
+ return 0.0D;
+ }
+
+ /** what was the average value across the events in the most active period? */
+ public double getExtremeAverageValue() {
+ if ((_extremeTotalValue != 0) && (_extremeEventCount > 0))
+ return _extremeTotalValue / _extremeEventCount;
+
+ return 0.0D;
+ }
+
+ /** what was the average value across the events since the stat was created? */
+ public double getLifetimeAverageValue() {
+ if ((_lifetimeTotalValue != 0) && (_lifetimeEventCount > 0))
+ return _lifetimeTotalValue / _lifetimeEventCount;
+
+ return 0.0D;
+ }
+
+ /**
+ * During the last period, how much of the time was spent actually processing events in proportion
+ * to how many events could have occurred if there were no intervals?
+ *
+ * @return percentage, or 0 if event times aren't used
+ */
+ public double getLastEventSaturation() {
+ if ((_lastEventCount > 0) && (_lastTotalEventTime > 0)) {
+ /*double eventTime = (double) _lastTotalEventTime / (double) _lastEventCount;
+ double maxEvents = _period / eventTime;
+ double saturation = _lastEventCount / maxEvents;
+ return saturation;
+ */
+ return ((double)_lastTotalEventTime) / (double)_period;
+ }
+
+ return 0.0D;
+ }
+
+ /**
+ * During the extreme period, how much of the time was spent actually processing events
+ * in proportion to how many events could have occurred if there were no intervals?
+ *
+ * @return percentage, or 0 if the statistic doesn't use event times
+ */
+ public double getExtremeEventSaturation() {
+ if ((_extremeEventCount > 0) && (_extremeTotalEventTime > 0)) {
+ double eventTime = (double) _extremeTotalEventTime / (double) _extremeEventCount;
+ double maxEvents = _period / eventTime;
+ return _extremeEventCount / maxEvents;
+ }
+ return 0.0D;
+ }
+
+ /**
+ * During the lifetime of this stat, how much of the time was spent actually processing events in proportion
+ * to how many events could have occurred if there were no intervals?
+ *
+ * @return percentage, or 0 if event times aren't used
+ */
+ public double getLifetimeEventSaturation() {
+ if ((_lastEventCount > 0) && (_lifetimeTotalEventTime > 0)) {
+ double eventTime = (double) _lifetimeTotalEventTime / (double) _lifetimeEventCount;
+ double maxEvents = _period / eventTime;
+ double numPeriods = getLifetimePeriods();
+ double avgEventsPerPeriod = _lifetimeEventCount / numPeriods;
+ return avgEventsPerPeriod / maxEvents;
+ }
+ return 0.0D;
+ }
+
+ /** how many periods have we already completed? */
+ public long getLifetimePeriods() {
+ long lifetime = now() - _creationDate;
+ double periods = lifetime / (double) _period;
+ return (long) Math.floor(periods);
+ }
+
+ /**
+ * using the last period's rate, what is the total value that could have been sent
+ * if events were constant?
+ *
+ * @return max total value, or 0 if event times aren't used
+ */
+ public double getLastSaturationLimit() {
+ if ((_lastTotalValue != 0) && (_lastEventCount > 0) && (_lastTotalEventTime > 0)) {
+ double saturation = getLastEventSaturation();
+ if (saturation != 0.0D) return _lastTotalValue / saturation;
+
+ return 0.0D;
+ }
+ return 0.0D;
+ }
+
+ /**
+ * using the extreme period's rate, what is the total value that could have been
+ * sent if events were constant?
+ *
+ * @return event total at saturation, or 0 if no event times are measured
+ */
+ public double getExtremeSaturationLimit() {
+ if ((_extremeTotalValue != 0) && (_extremeEventCount > 0) && (_extremeTotalEventTime > 0)) {
+ double saturation = getExtremeEventSaturation();
+ if (saturation != 0.0d) return _extremeTotalValue / saturation;
+
+ return 0.0D;
+ }
+
+ return 0.0D;
+ }
+
+ /**
+ * How large was the last period's value as compared to the largest period ever?
+ *
+ */
+ public double getPercentageOfExtremeValue() {
+ if ((_lastTotalValue != 0) && (_extremeTotalValue != 0))
+ return _lastTotalValue / _extremeTotalValue;
+
+ return 0.0D;
+ }
+
+ /**
+ * How large was the last period's value as compared to the lifetime average value?
+ *
+ */
+ public double getPercentageOfLifetimeValue() {
+ if ((_lastTotalValue != 0) && (_lifetimeTotalValue != 0)) {
+ double lifetimePeriodValue = _period * (_lifetimeTotalValue / (now() - _creationDate));
+ return _lastTotalValue / lifetimePeriodValue;
+ }
+
+ return 0.0D;
+ }
+
+ public void store(String prefix, StringBuffer buf) throws IOException {
+ PersistenceHelper.add(buf, prefix, ".period", "Number of milliseconds in the period", _period);
+ PersistenceHelper.add(buf, prefix, ".creationDate",
+ "When was this rate created? (milliseconds since the epoch, GMT)", _creationDate);
+ PersistenceHelper.add(buf, prefix, ".lastCoalesceDate",
+ "When did we last coalesce this rate? (milliseconds since the epoch, GMT)",
+ _lastCoalesceDate);
+ PersistenceHelper.add(buf, prefix, ".currentDate",
+ "When did this data get written? (milliseconds since the epoch, GMT)", now());
+ PersistenceHelper.add(buf, prefix, ".currentTotalValue",
+ "Total value of data points in the current (uncoalesced) period", _currentTotalValue);
+ PersistenceHelper
+ .add(buf, prefix, ".currentEventCount",
+ "How many events have occurred in the current (uncoalesced) period?", _currentEventCount);
+ PersistenceHelper.add(buf, prefix, ".currentTotalEventTime",
+ "How many milliseconds have the events in the current (uncoalesced) period consumed?",
+ _currentTotalEventTime);
+ PersistenceHelper.add(buf, prefix, ".lastTotalValue",
+ "Total value of data points in the most recent (coalesced) period", _lastTotalValue);
+ PersistenceHelper.add(buf, prefix, ".lastEventCount",
+ "How many events have occurred in the most recent (coalesced) period?", _lastEventCount);
+ PersistenceHelper.add(buf, prefix, ".lastTotalEventTime",
+ "How many milliseconds have the events in the most recent (coalesced) period consumed?",
+ _lastTotalEventTime);
+ PersistenceHelper.add(buf, prefix, ".extremeTotalValue",
+ "Total value of data points in the most extreme period", _extremeTotalValue);
+ PersistenceHelper.add(buf, prefix, ".extremeEventCount",
+ "How many events have occurred in the most extreme period?", _extremeEventCount);
+ PersistenceHelper.add(buf, prefix, ".extremeTotalEventTime",
+ "How many milliseconds have the events in the most extreme period consumed?",
+ _extremeTotalEventTime);
+ PersistenceHelper.add(buf, prefix, ".lifetimeTotalValue",
+ "Total value of data points since this stat was created", _lifetimeTotalValue);
+ PersistenceHelper.add(buf, prefix, ".lifetimeEventCount",
+ "How many events have occurred since this stat was created?", _lifetimeEventCount);
+ PersistenceHelper.add(buf, prefix, ".lifetimeTotalEventTime",
+ "How many milliseconds have the events since this stat was created consumed?",
+ _lifetimeTotalEventTime);
+ }
+
+ /**
+ * Load this rate from the properties, taking data from the data points
+ * underneath the given prefix.
+ *
+ * @param prefix prefix to the property entries (should NOT end with a period)
+ * @param treatAsCurrent if true, we'll treat the loaded data as if no time has
+ * elapsed since it was written out, but if it is false, we'll
+ * treat the data with as much freshness (or staleness) as appropriate.
+ * @throws IllegalArgumentException if the data was formatted incorrectly
+ */
+ public void load(Properties props, String prefix, boolean treatAsCurrent) throws IllegalArgumentException {
+ _period = PersistenceHelper.getLong(props, prefix, ".period");
+ _creationDate = PersistenceHelper.getLong(props, prefix, ".creationDate");
+ _lastCoalesceDate = PersistenceHelper.getLong(props, prefix, ".lastCoalesceDate");
+ _currentTotalValue = PersistenceHelper.getDouble(props, prefix, ".currentTotalValue");
+ _currentEventCount = PersistenceHelper.getLong(props, prefix, ".currentEventCount");
+ _currentTotalEventTime = PersistenceHelper.getLong(props, prefix, ".currentTotalEventTime");
+ _lastTotalValue = PersistenceHelper.getDouble(props, prefix, ".lastTotalValue");
+ _lastEventCount = PersistenceHelper.getLong(props, prefix, ".lastEventCount");
+ _lastTotalEventTime = PersistenceHelper.getLong(props, prefix, ".lastTotalEventTime");
+ _extremeTotalValue = PersistenceHelper.getDouble(props, prefix, ".extremeTotalValue");
+ _extremeEventCount = PersistenceHelper.getLong(props, prefix, ".extremeEventCount");
+ _extremeTotalEventTime = PersistenceHelper.getLong(props, prefix, ".extremeTotalEventTime");
+ _lifetimeTotalValue = PersistenceHelper.getDouble(props, prefix, ".lifetimeTotalValue");
+ _lifetimeEventCount = PersistenceHelper.getLong(props, prefix, ".lifetimeEventCount");
+ _lifetimeTotalEventTime = PersistenceHelper.getLong(props, prefix, ".lifetimeTotalEventTime");
+
+ if (treatAsCurrent) _lastCoalesceDate = now();
+
+ if (_period <= 0) throw new IllegalArgumentException("Period for " + prefix + " is invalid");
+ coalesce();
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || (obj.getClass() != Rate.class)) return false;
+ if (obj == this) return true;
+ Rate r = (Rate) obj;
+ return _period == r.getPeriod() && _creationDate == r.getCreationDate() &&
+ //_lastCoalesceDate == r.getLastCoalesceDate() &&
+ _currentTotalValue == r.getCurrentTotalValue() && _currentEventCount == r.getCurrentEventCount()
+ && _currentTotalEventTime == r.getCurrentTotalEventTime() && _lastTotalValue == r.getLastTotalValue()
+ && _lastEventCount == r.getLastEventCount() && _lastTotalEventTime == r.getLastTotalEventTime()
+ && _extremeTotalValue == r.getExtremeTotalValue() && _extremeEventCount == r.getExtremeEventCount()
+ && _extremeTotalEventTime == r.getExtremeTotalEventTime()
+ && _lifetimeTotalValue == r.getLifetimeTotalValue() && _lifetimeEventCount == r.getLifetimeEventCount()
+ && _lifetimeTotalEventTime == r.getLifetimeTotalEventTime();
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(2048);
+ buf.append("\n\t total value: ").append(getLastTotalValue());
+ buf.append("\n\t highest total value: ").append(getExtremeTotalValue());
+ buf.append("\n\t lifetime total value: ").append(getLifetimeTotalValue());
+ buf.append("\n\t # periods: ").append(getLifetimePeriods());
+ buf.append("\n\t average value: ").append(getAverageValue());
+ buf.append("\n\t highest average value: ").append(getExtremeAverageValue());
+ buf.append("\n\t lifetime average value: ").append(getLifetimeAverageValue());
+ buf.append("\n\t % of lifetime rate: ").append(100.0d * getPercentageOfLifetimeValue());
+ buf.append("\n\t % of highest rate: ").append(100.0d * getPercentageOfExtremeValue());
+ buf.append("\n\t # events: ").append(getLastEventCount());
+ buf.append("\n\t lifetime events: ").append(getLifetimeEventCount());
+ if (getLifetimeTotalEventTime() > 0) {
+ // we have some actual event durations
+ buf.append("\n\t % of time spent processing events: ").append(100.0d * getLastEventSaturation());
+ buf.append("\n\t total value if we were always processing events: ").append(getLastSaturationLimit());
+ buf.append("\n\t max % of time spent processing events: ").append(100.0d * getExtremeEventSaturation());
+ buf.append("\n\t max total value if we were always processing events: ")
+ .append(getExtremeSaturationLimit());
+ }
+ return buf.toString();
+ }
+
+ private final static long now() {
+ // "event time" is in the stat log (and uses Clock).
+ // we just want sequential and stable time here, so use the OS time, since it doesn't
+ // skew periodically
+ return System.currentTimeMillis(); //Clock.getInstance().now();
+ }
+
+ public static void main(String args[]) {
+ Rate rate = new Rate(1000);
+ for (int i = 0; i < 50; i++) {
+ try {
+ Thread.sleep(20);
+ } catch (InterruptedException ie) { // nop
+ }
+ rate.addData(i * 100, 20);
+ }
+ rate.coalesce();
+ StringBuffer buf = new StringBuffer(1024);
+ try {
+ rate.store("rate.test", buf);
+ byte data[] = buf.toString().getBytes();
+ _log.error("Stored rate: size = " + data.length + "\n" + buf.toString());
+
+ Properties props = new Properties();
+ props.load(new java.io.ByteArrayInputStream(data));
+
+ //_log.error("Properties loaded: \n" + props);
+
+ Rate r = new Rate(props, "rate.test", true);
+
+ _log.error("Comparison after store/load: " + r.equals(rate));
+ } catch (Throwable t) {
+ _log.error("b0rk", t);
+ }
+ try {
+ Thread.sleep(5000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+}
diff --git a/src/net/i2p/stat/RateStat.java b/src/net/i2p/stat/RateStat.java
new file mode 100644
index 0000000..dc03072
--- /dev/null
+++ b/src/net/i2p/stat/RateStat.java
@@ -0,0 +1,207 @@
+package net.i2p.stat;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Arrays;
+import java.util.Properties;
+
+import net.i2p.data.DataHelper;
+import net.i2p.util.Log;
+
+/** coordinate a moving rate over various periods */
+public class RateStat {
+ private final static Log _log = new Log(RateStat.class);
+ /** unique name of the statistic */
+ private String _statName;
+ /** grouping under which the stat is kept */
+ private String _groupName;
+ /** describe the stat */
+ private String _description;
+ /** actual rate objects for this statistic */
+ private Rate _rates[];
+ /** component we tell about events as they occur */
+ private StatLog _statLog;
+
+ public RateStat(String name, String description, String group, long periods[]) {
+ _statName = name;
+ _description = description;
+ _groupName = group;
+ _rates = new Rate[periods.length];
+ for (int i = 0; i < periods.length; i++) {
+ _rates[i] = new Rate(periods[i]);
+ _rates[i].setRateStat(this);
+ }
+ }
+ public void setStatLog(StatLog sl) { _statLog = sl; }
+
+ /**
+ * update all of the rates for the various periods with the given value.
+ */
+ public void addData(long value, long eventDuration) {
+ if (_statLog != null) _statLog.addData(_groupName, _statName, value, eventDuration);
+ for (int i = 0; i < _rates.length; i++)
+ _rates[i].addData(value, eventDuration);
+ }
+
+ /** coalesce all the stats */
+ public void coalesceStats() {
+ for (int i = 0; i < _rates.length; i++)
+ _rates[i].coalesce();
+ }
+
+ public String getName() {
+ return _statName;
+ }
+
+ public String getGroupName() {
+ return _groupName;
+ }
+
+ public String getDescription() {
+ return _description;
+ }
+
+ public long[] getPeriods() {
+ long rv[] = new long[_rates.length];
+ for (int i = 0; i < _rates.length; i++)
+ rv[i] = _rates[i].getPeriod();
+ return rv;
+ }
+
+ public double getLifetimeAverageValue() {
+ if ( (_rates == null) || (_rates.length <= 0) ) return 0;
+ return _rates[0].getLifetimeAverageValue();
+ }
+ public double getLifetimeEventCount() {
+ if ( (_rates == null) || (_rates.length <= 0) ) return 0;
+ return _rates[0].getLifetimeEventCount();
+ }
+
+ public Rate getRate(long period) {
+ for (int i = 0; i < _rates.length; i++) {
+ if (_rates[i].getPeriod() == period) return _rates[i];
+ }
+ return null;
+ }
+
+ public int hashCode() {
+ return _statName.hashCode();
+ }
+
+ private final static String NL = System.getProperty("line.separator");
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer(4096);
+ buf.append(getGroupName()).append('.').append(getName()).append(": ").append(getDescription()).append('\n');
+ long periods[] = getPeriods();
+ Arrays.sort(periods);
+ for (int i = 0; i < periods.length; i++) {
+ buf.append('\t').append(periods[i]).append(':');
+ Rate curRate = getRate(periods[i]);
+ buf.append(curRate.toString());
+ buf.append(NL);
+ }
+ return buf.toString();
+ }
+
+ public boolean equals(Object obj) {
+ if ((obj == null) || (obj.getClass() != RateStat.class)) return false;
+ RateStat rs = (RateStat) obj;
+ if (DataHelper.eq(getGroupName(), rs.getGroupName()) && DataHelper.eq(getDescription(), rs.getDescription())
+ && DataHelper.eq(getName(), rs.getName())) {
+ for (int i = 0; i < _rates.length; i++)
+ if (!_rates[i].equals(rs.getRate(_rates[i].getPeriod()))) return false;
+ return true;
+ }
+
+ return false;
+ }
+
+ public void store(OutputStream out, String prefix) throws IOException {
+ StringBuffer buf = new StringBuffer(1024);
+ buf.append(NL);
+ buf.append("################################################################################").append(NL);
+ buf.append("# Rate: ").append(_groupName).append(": ").append(_statName).append(NL);
+ buf.append("# ").append(_description).append(NL);
+ buf.append("# ").append(NL).append(NL);
+ out.write(buf.toString().getBytes());
+ buf.setLength(0);
+ for (int i = 0; i < _rates.length; i++) {
+ buf.append("#######").append(NL);
+ buf.append("# Period : ").append(DataHelper.formatDuration(_rates[i].getPeriod())).append(" for rate ")
+ .append(_groupName).append(" - ").append(_statName).append(NL);
+ buf.append(NL);
+ out.write(buf.toString().getBytes());
+ String curPrefix = prefix + "." + DataHelper.formatDuration(_rates[i].getPeriod());
+ _rates[i].store(curPrefix, buf);
+ out.write(buf.toString().getBytes());
+ buf.setLength(0);
+ }
+ }
+
+ /**
+ * Load this rate stat from the properties, populating all of the rates contained
+ * underneath it. The data comes from the given prefix (e.g. if we are given the prefix
+ * "profile.dbIntroduction", a series of rates may be found underneath
+ * "profile.dbIntroduction.60s", "profile.dbIntroduction.60m", and "profile.dbIntroduction.24h").
+ * This RateStat must already be created, with the specified rate entries constructed - this
+ * merely loads them with data.
+ *
+ * @param prefix prefix to the property entries (should NOT end with a period)
+ * @param treatAsCurrent if true, we'll treat the loaded data as if no time has
+ * elapsed since it was written out, but if it is false, we'll
+ * treat the data with as much freshness (or staleness) as appropriate.
+ * @throws IllegalArgumentException if the data was formatted incorrectly
+ */
+ public void load(Properties props, String prefix, boolean treatAsCurrent) throws IllegalArgumentException {
+ for (int i = 0; i < _rates.length; i++) {
+ long period = _rates[i].getPeriod();
+ String curPrefix = prefix + "." + DataHelper.formatDuration(period);
+ try {
+ _rates[i].load(props, curPrefix, treatAsCurrent);
+ } catch (IllegalArgumentException iae) {
+ _rates[i] = new Rate(period);
+ _rates[i].setRateStat(this);
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Rate for " + prefix + " is corrupt, reinitializing that period");
+ }
+ }
+ }
+
+ public static void main(String args[]) {
+ RateStat rs = new RateStat("moo", "moo moo moo", "cow trueisms", new long[] { 60 * 1000, 60 * 60 * 1000,
+ 24 * 60 * 60 * 1000});
+ for (int i = 0; i < 50; i++) {
+ try {
+ Thread.sleep(20);
+ } catch (InterruptedException ie) { // nop
+ }
+ rs.addData(i * 100, 20);
+ }
+ rs.coalesceStats();
+ java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream(2048);
+ try {
+ rs.store(baos, "rateStat.test");
+ byte data[] = baos.toByteArray();
+ _log.error("Stored rateStat: size = " + data.length + "\n" + new String(data));
+
+ Properties props = new Properties();
+ props.load(new java.io.ByteArrayInputStream(data));
+
+ //_log.error("Properties loaded: \n" + props);
+
+ RateStat loadedRs = new RateStat("moo", "moo moo moo", "cow trueisms", new long[] { 60 * 1000,
+ 60 * 60 * 1000,
+ 24 * 60 * 60 * 1000});
+ loadedRs.load(props, "rateStat.test", true);
+
+ _log.error("Comparison after store/load: " + rs.equals(loadedRs));
+ } catch (Throwable t) {
+ _log.error("b0rk", t);
+ }
+ try {
+ Thread.sleep(5000);
+ } catch (InterruptedException ie) { // nop
+ }
+ }
+}
diff --git a/src/net/i2p/stat/RateSummaryListener.java b/src/net/i2p/stat/RateSummaryListener.java
new file mode 100644
index 0000000..449dce9
--- /dev/null
+++ b/src/net/i2p/stat/RateSummaryListener.java
@@ -0,0 +1,14 @@
+package net.i2p.stat;
+
+/**
+ * Receive the state of the rate when it is coalesced
+ */
+public interface RateSummaryListener {
+ /**
+ * @param totalValue sum of all event values in the most recent period
+ * @param eventCount how many events occurred
+ * @param totalEventTime how long the events were running for
+ * @param period how long this period is
+ */
+ void add(double totalValue, long eventCount, double totalEventTime, long period);
+}
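
Rate.coalesce() hands each finished period to the listener registered via setSummaryListener(), which is how longer term summaries can be fed elsewhere. A minimal sketch (the demo class is invented):

    import net.i2p.stat.Rate;
    import net.i2p.stat.RateSummaryListener;

    public class SummaryListenerDemo {
        public static void main(String args[]) throws InterruptedException {
            Rate rate = new Rate(1000); // one second period
            rate.setSummaryListener(new RateSummaryListener() {
                public void add(double totalValue, long eventCount, double totalEventTime, long period) {
                    System.out.println(eventCount + " events totalling " + totalValue
                                       + " in a " + period + "ms period");
                }
            });
            for (int i = 0; i < 50; i++) {
                Thread.sleep(20);
                rate.addData(100, 20);
            }
            rate.coalesce(); // pushes the just-finished period to the listener
        }
    }
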
diff --git a/src/net/i2p/stat/SimpleStatDumper.java b/src/net/i2p/stat/SimpleStatDumper.java
new file mode 100644
index 0000000..2a7d4ae
--- /dev/null
+++ b/src/net/i2p/stat/SimpleStatDumper.java
@@ -0,0 +1,65 @@
+package net.i2p.stat;
+
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.Set;
+import java.util.TreeSet;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.Log;
+
+public class SimpleStatDumper {
+ private final static Log _log = new Log(SimpleStatDumper.class);
+
+ public static void dumpStats(I2PAppContext context, int logLevel) {
+ if (!_log.shouldLog(logLevel)) return;
+
+ StringBuffer buf = new StringBuffer(4 * 1024);
+ dumpFrequencies(context, buf);
+ dumpRates(context, buf);
+ _log.log(logLevel, buf.toString());
+ }
+
+ private static void dumpFrequencies(I2PAppContext ctx, StringBuffer buf) {
+ Set frequencies = new TreeSet(ctx.statManager().getFrequencyNames());
+ for (Iterator iter = frequencies.iterator(); iter.hasNext();) {
+ String name = (String) iter.next();
+ FrequencyStat freq = ctx.statManager().getFrequency(name);
+ buf.append('\n');
+ buf.append(freq.getGroupName()).append('.').append(freq.getName()).append(": ")
+ .append(freq.getDescription()).append('\n');
+ long periods[] = freq.getPeriods();
+ Arrays.sort(periods);
+ for (int i = 0; i < periods.length; i++) {
+ buf.append('\t').append(periods[i]).append(':');
+ Frequency curFreq = freq.getFrequency(periods[i]);
+ buf.append(" average interval: ").append(curFreq.getAverageInterval());
+ buf.append(" min average interval: ").append(curFreq.getMinAverageInterval());
+ buf.append('\n');
+ }
+ }
+ }
+
+ private static void dumpRates(I2PAppContext ctx, StringBuffer buf) {
+ Set rates = new TreeSet(ctx.statManager().getRateNames());
+ for (Iterator iter = rates.iterator(); iter.hasNext();) {
+ String name = (String) iter.next();
+ RateStat rate = ctx.statManager().getRate(name);
+ buf.append('\n');
+ buf.append(rate.getGroupName()).append('.').append(rate.getName()).append(": ")
+ .append(rate.getDescription()).append('\n');
+ long periods[] = rate.getPeriods();
+ Arrays.sort(periods);
+ for (int i = 0; i < periods.length; i++) {
+ buf.append('\t').append(periods[i]).append(':');
+ Rate curRate = rate.getRate(periods[i]);
+ dumpRate(curRate, buf);
+ buf.append('\n');
+ }
+ }
+ }
+
+ static void dumpRate(Rate curRate, StringBuffer buf) {
+ buf.append(curRate.toString());
+ }
+}
\ No newline at end of file
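
A one-call sketch of the dumper; Log.ERROR is assumed to exist alongside the Log.DEBUG and Log.WARN constants used elsewhere in this patch, and the demo.requestTime stat is hypothetical.

    import net.i2p.I2PAppContext;
    import net.i2p.stat.SimpleStatDumper;
    import net.i2p.util.Log;

    public class DumpStats {
        public static void main(String args[]) {
            I2PAppContext ctx = I2PAppContext.getGlobalContext();
            ctx.statManager().createRateStat("demo.requestTime", "How long requests take",
                                             "demo", new long[] { 60*1000 });
            ctx.statManager().addRateData("demo.requestTime", 42, 0);
            // dumps every known frequency and rate to the log at the given priority
            SimpleStatDumper.dumpStats(ctx, Log.ERROR);
        }
    }
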
diff --git a/src/net/i2p/stat/SizeTest.java b/src/net/i2p/stat/SizeTest.java
new file mode 100644
index 0000000..aaf157c
--- /dev/null
+++ b/src/net/i2p/stat/SizeTest.java
@@ -0,0 +1,58 @@
+package net.i2p.stat;
+
+public class SizeTest {
+ public static void main(String args[]) {
+ testRateSize(100); //117KB
+ testRateSize(100000); // 4.5MB
+ testRateSize(440000); // 44MB
+ //testFrequencySize(100); // 114KB
+ //testFrequencySize(100000); // 5.3MB
+ //testFrequencySize(1000000); // 52MB
+ }
+
+ private static void testRateSize(int num) {
+ Runtime.getRuntime().gc();
+ Rate rate[] = new Rate[num];
+ long used = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
+ long usedPer = used / num;
+ System.out
+ .println(num + ": create array - Used: " + used + " bytes (or " + usedPer + " bytes per array entry)");
+
+ int i = 0;
+ try {
+ for (; i < num; i++)
+ rate[i] = new Rate(1234);
+ } catch (OutOfMemoryError oom) {
+ rate = null;
+ Runtime.getRuntime().gc();
+ System.out.println("Ran out of memory when creating rate " + i);
+ return;
+ }
+ Runtime.getRuntime().gc();
+ long usedObjects = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
+ usedPer = usedObjects / num;
+ System.out.println(num + ": create objects - Used: " + usedObjects + " bytes (or " + usedPer
+ + " bytes per rate)");
+ rate = null;
+ Runtime.getRuntime().gc();
+ }
+
+ private static void testFrequencySize(int num) {
+ Runtime.getRuntime().gc();
+ Frequency freq[] = new Frequency[num];
+ long used = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
+ long usedPer = used / num;
+ System.out
+ .println(num + ": create array - Used: " + used + " bytes (or " + usedPer + " bytes per array entry)");
+
+ for (int i = 0; i < num; i++)
+ freq[i] = new Frequency(1234);
+ Runtime.getRuntime().gc();
+ long usedObjects = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
+ usedPer = usedObjects / num;
+ System.out.println(num + ": create objects - Used: " + usedObjects + " bytes (or " + usedPer
+ + " bytes per frequency)");
+ freq = null;
+ Runtime.getRuntime().gc();
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/stat/StatLog.java b/src/net/i2p/stat/StatLog.java
new file mode 100644
index 0000000..bc4ef81
--- /dev/null
+++ b/src/net/i2p/stat/StatLog.java
@@ -0,0 +1,8 @@
+package net.i2p.stat;
+
+/**
+ * Component to be notified when a particular event occurs
+ */
+public interface StatLog {
+ public void addData(String scope, String stat, long value, long duration);
+}
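
Any component implementing this interface can be swapped in through StatManager.setStatLog() (defined further down in this patch) in place of the default BufferedStatLog. A sketch that simply echoes every data point; the class and stat names are invented:

    import net.i2p.I2PAppContext;
    import net.i2p.stat.StatLog;

    public class StdoutStatLog implements StatLog {
        public void addData(String scope, String stat, long value, long duration) {
            System.out.println(scope + "/" + stat + " = " + value + " (" + duration + "ms)");
        }

        public static void main(String args[]) {
            I2PAppContext ctx = I2PAppContext.getGlobalContext();
            ctx.statManager().setStatLog(new StdoutStatLog()); // replace the buffered logger
            ctx.statManager().createRateStat("demo.requestTime", "How long requests take",
                                             "demo", new long[] { 60*1000 });
            ctx.statManager().addRateData("demo.requestTime", 37, 12);
        }
    }
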
diff --git a/src/net/i2p/stat/StatLogSplitter.java b/src/net/i2p/stat/StatLogSplitter.java
new file mode 100644
index 0000000..ca9c357
--- /dev/null
+++ b/src/net/i2p/stat/StatLogSplitter.java
@@ -0,0 +1,76 @@
+package net.i2p.stat;
+
+import java.io.IOException;
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.text.SimpleDateFormat;
+import java.text.ParseException;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+
+
+/**
+ * Simple CLI to split the stat logs into per-stat files containing
+ * #seconds since beginning and the value (ready for loading into your
+ * favorite plotting tool)
+ */
+public class StatLogSplitter {
+ private static final String DATE_FORMAT = "yyyyMMdd HH:mm:ss.SSS";
+ private static SimpleDateFormat _fmt = new SimpleDateFormat(DATE_FORMAT);
+ public static void main(String args[]) {
+ if (args.length != 1) {
+ System.err.println("Usage: StatLogSplitter filename");
+ return;
+ }
+ splitLog(args[0]);
+ }
+
+ private static void splitLog(String filename) {
+ Map outputFiles = new HashMap(4);
+ try {
+ BufferedReader in = new BufferedReader(new FileReader(filename));
+ String line;
+ long first = 0;
+ while ( (line = in.readLine()) != null) {
+ String date = line.substring(0, DATE_FORMAT.length()).trim();
+ int endGroup = line.indexOf(' ', DATE_FORMAT.length()+1);
+ int endStat = line.indexOf(' ', endGroup+1);
+ int endValue = line.indexOf(' ', endStat+1);
+ String group = line.substring(DATE_FORMAT.length()+1, endGroup).trim();
+ String stat = line.substring(endGroup, endStat).trim();
+ String value = line.substring(endStat, endValue).trim();
+ String duration = line.substring(endValue).trim();
+ //System.out.println(date + " " + group + " " + stat + " " + value + " " + duration);
+
+ try {
+ Date when = _fmt.parse(date);
+ if (first <= 0) first = when.getTime();
+ long val = Long.parseLong(value);
+ long time = Long.parseLong(duration);
+ if (!outputFiles.containsKey(stat)) {
+ outputFiles.put(stat, new FileWriter(stat + ".dat"));
+ System.out.println("Including data to " + stat + ".dat");
+ }
+ FileWriter out = (FileWriter)outputFiles.get(stat);
+ double s = (when.getTime()-first)/1000.0;
+ //long s = when.getTime();
+ out.write(s + " " + val + " [" + line + "]\n");
+ out.flush();
+ } catch (ParseException pe) {
+ continue;
+ } catch (NumberFormatException nfe){
+ continue;
+ }
+ }
+ } catch (IOException ioe) {
+ ioe.printStackTrace();
+ }
+ for (Iterator iter = outputFiles.values().iterator(); iter.hasNext(); ) {
+ FileWriter out = (FileWriter)iter.next();
+ try { out.close(); } catch (IOException ioe) {}
+ }
+ }
+}
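
Run as "java net.i2p.stat.StatLogSplitter stats.log" (per the usage message above), this produces one <statName>.dat file per stat, each line holding the seconds since the first event, the value, and the original log line, so column 1 against column 2 can go straight into a plotting tool such as gnuplot. The same thing programmatically, assuming the default stats.log name:

    public class SplitDemo {
        public static void main(String args[]) {
            // "stats.log" is the default BufferedStatLog output (stat.logFile)
            net.i2p.stat.StatLogSplitter.main(new String[] { "stats.log" });
        }
    }
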
diff --git a/src/net/i2p/stat/StatManager.java b/src/net/i2p/stat/StatManager.java
new file mode 100644
index 0000000..76b1c13
--- /dev/null
+++ b/src/net/i2p/stat/StatManager.java
@@ -0,0 +1,166 @@
+package net.i2p.stat;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.Log;
+
+/**
+ * Coordinate the management of various frequencies and rates within I2P components,
+ * allowing both central update and retrieval as well as distributed creation and
+ * use. This does not provide any persistence, but the data structures exposed can be
+ * read and updated to manage the complete state.
+ *
+ */
+public class StatManager {
+ private Log _log;
+ private I2PAppContext _context;
+
+ /** stat name to FrequencyStat */
+ private Map _frequencyStats;
+ /** stat name to RateStat */
+ private Map _rateStats;
+ private StatLog _statLog;
+
+ public static final String PROP_STAT_FILTER = "stat.logFilters";
+ public static final String PROP_STAT_FILE = "stat.logFile";
+ public static final String DEFAULT_STAT_FILE = "stats.log";
+
+ /**
+ * The stat manager should only be constructed and accessed through the
+ * application context. This constructor should only be used by the
+ * appropriate application context itself.
+ *
+ */
+ public StatManager(I2PAppContext context) {
+ _log = context.logManager().getLog(StatManager.class);
+ _context = context;
+ _frequencyStats = Collections.synchronizedMap(new HashMap(128));
+ _rateStats = Collections.synchronizedMap(new HashMap(128));
+ _statLog = new BufferedStatLog(context);
+ }
+
+ public StatLog getStatLog() { return _statLog; }
+ public void setStatLog(StatLog log) {
+ _statLog = log;
+ synchronized (_rateStats) {
+ for (Iterator iter = _rateStats.values().iterator(); iter.hasNext(); ) {
+ RateStat rs = (RateStat)iter.next();
+ rs.setStatLog(log);
+ }
+ }
+ }
+
+ /**
+ * Create a new statistic to monitor the frequency of some event.
+ *
+ * @param name unique name of the statistic
+ * @param description simple description of the statistic
+ * @param group used to group statistics together
+ * @param periods array of period lengths (in milliseconds)
+ */
+ public void createFrequencyStat(String name, String description, String group, long periods[]) {
+ if (_frequencyStats.containsKey(name)) return;
+ _frequencyStats.put(name, new FrequencyStat(name, description, group, periods));
+ }
+
+ /**
+ * Create a new statistic to monitor the average value and confidence of some action.
+ *
+ * @param name unique name of the statistic
+ * @param description simple description of the statistic
+ * @param group used to group statistics together
+ * @param periods array of period lengths (in milliseconds)
+ */
+ public void createRateStat(String name, String description, String group, long periods[]) {
+ if (_rateStats.containsKey(name)) return;
+ RateStat rs = new RateStat(name, description, group, periods);
+ if (_statLog != null) rs.setStatLog(_statLog);
+ _rateStats.put(name, rs);
+ }
+
+ /** update the given frequency statistic, taking note that an event occurred (and recalculating all frequencies) */
+ public void updateFrequency(String name) {
+ FrequencyStat freq = (FrequencyStat) _frequencyStats.get(name);
+ if (freq != null) freq.eventOccurred();
+ }
+
+ /** update the given rate statistic, taking note that the given data point was received (and recalculating all rates) */
+ public void addRateData(String name, long data, long eventDuration) {
+ RateStat stat = (RateStat) _rateStats.get(name);
+ if (stat != null) stat.addData(data, eventDuration);
+ }
+
+ public void coalesceStats() {
+ synchronized (_frequencyStats) {
+ for (Iterator iter = _frequencyStats.values().iterator(); iter.hasNext();) {
+ FrequencyStat stat = (FrequencyStat)iter.next();
+ if (stat != null) {
+ stat.coalesceStats();
+ }
+ }
+ }
+ synchronized (_rateStats) {
+ for (Iterator iter = _rateStats.values().iterator(); iter.hasNext();) {
+ RateStat stat = (RateStat)iter.next();
+ if (stat != null) {
+ stat.coalesceStats();
+ }
+ }
+ }
+ }
+
+ public FrequencyStat getFrequency(String name) {
+ return (FrequencyStat) _frequencyStats.get(name);
+ }
+
+ public RateStat getRate(String name) {
+ return (RateStat) _rateStats.get(name);
+ }
+
+ public Set getFrequencyNames() {
+ return new HashSet(_frequencyStats.keySet());
+ }
+
+ public Set getRateNames() {
+ return new HashSet(_rateStats.keySet());
+ }
+
+ /** is the given stat a monitored rate? */
+ public boolean isRate(String statName) {
+ return _rateStats.containsKey(statName);
+ }
+
+ /** is the given stat a monitored frequency? */
+ public boolean isFrequency(String statName) {
+ return _frequencyStats.containsKey(statName);
+ }
+
+ /** Group name (String) to a Set of stat names, ordered alphabetically */
+ public Map getStatsByGroup() {
+ Map groups = new TreeMap();
+ for (Iterator iter = _frequencyStats.values().iterator(); iter.hasNext();) {
+ FrequencyStat stat = (FrequencyStat) iter.next();
+ if (!groups.containsKey(stat.getGroupName())) groups.put(stat.getGroupName(), new TreeSet());
+ Set names = (Set) groups.get(stat.getGroupName());
+ names.add(stat.getName());
+ }
+ for (Iterator iter = _rateStats.values().iterator(); iter.hasNext();) {
+ RateStat stat = (RateStat) iter.next();
+ if (!groups.containsKey(stat.getGroupName())) groups.put(stat.getGroupName(), new TreeSet());
+ Set names = (Set) groups.get(stat.getGroupName());
+ names.add(stat.getName());
+ }
+ return groups;
+ }
+
+ public String getStatFilter() { return _context.getProperty(PROP_STAT_FILTER); }
+ public String getStatFile() { return _context.getProperty(PROP_STAT_FILE, DEFAULT_STAT_FILE); }
+}
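For context, a minimal sketch (not part of this patch) of how a component might use the manager above through the application context; the class name, the stat name "demo.fetchTime", and the period values are purely illustrative.

import net.i2p.I2PAppContext;

public class StatManagerDemo {
    public static void main(String args[]) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        // register a rate tracked over 1 minute and 1 hour periods
        ctx.statManager().createRateStat("demo.fetchTime", "How long a fetch takes",
                                         "Demo", new long[] { 60*1000, 60*60*1000 });
        // record one data point: value 1234 with an event duration of 50ms
        ctx.statManager().addRateData("demo.fetchTime", 1234, 50);
        System.out.println("registered as a rate? " + ctx.statManager().isRate("demo.fetchTime"));
    }
}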
diff --git a/src/net/i2p/time/NtpClient.java b/src/net/i2p/time/NtpClient.java
new file mode 100644
index 0000000..9dada53
--- /dev/null
+++ b/src/net/i2p/time/NtpClient.java
@@ -0,0 +1,165 @@
+package net.i2p.time;
+/*
+ * Copyright (c) 2004, Adam Buckley
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * - Neither the name of Adam Buckley nor the names of its contributors may be
+ * used to endorse or promote products derived from this software without
+ * specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.net.DatagramPacket;
+import java.net.DatagramSocket;
+import java.net.InetAddress;
+import java.util.ArrayList;
+import java.util.Collections;
+
+
+/**
+ * NtpClient - an NTP client for Java. This program connects to an NTP server
+ * and prints the response to the console.
+ *
+ * The local clock offset calculation is implemented according to the SNTP
+ * algorithm specified in RFC 2030.
+ *
+ * Note that on Windows platforms, the current time-of-day timestamp is limited
+ * to a resolution of 10ms, which adversely affects the accuracy of the results.
+ *
+ * @author Adam Buckley
+ * (minor refactoring by jrandom)
+ */
+public class NtpClient {
+ /** difference between the unix epoch and jan 1 1900 (NTP uses that) */
+ private final static double SECONDS_1900_TO_EPOCH = 2208988800.0;
+ private final static int NTP_PORT = 123;
+
+ /**
+ * Query the NTP servers, returning the current time from the first one we find
+ *
+ * @return milliseconds since january 1, 1970 (UTC)
+ * @throws IllegalArgumentException if none of the servers are reachable
+ */
+ public static long currentTime(String serverNames[]) {
+ if (serverNames == null)
+ throw new IllegalArgumentException("No NTP servers specified");
+ ArrayList names = new ArrayList(serverNames.length);
+ for (int i = 0; i < serverNames.length; i++)
+ names.add(serverNames[i]);
+ Collections.shuffle(names);
+ for (int i = 0; i < names.size(); i++) {
+ long now = currentTime((String)names.get(i));
+ if (now > 0)
+ return now;
+ }
+ throw new IllegalArgumentException("No reachable NTP servers specified");
+ }
+
+ /**
+ * Query the given NTP server, returning the current internet time
+ *
+ * @return milliseconds since january 1, 1970 (UTC), or -1 on error
+ */
+ public static long currentTime(String serverName) {
+ try {
+ // Send request
+ DatagramSocket socket = new DatagramSocket();
+ InetAddress address = InetAddress.getByName(serverName);
+ byte[] buf = new NtpMessage().toByteArray();
+ DatagramPacket packet = new DatagramPacket(buf, buf.length, address, NTP_PORT);
+
+ // Set the transmit timestamp *just* before sending the packet
+ // ToDo: Does this actually improve performance or not?
+ NtpMessage.encodeTimestamp(packet.getData(), 40,
+ (System.currentTimeMillis()/1000.0)
+ + SECONDS_1900_TO_EPOCH);
+
+ socket.send(packet);
+
+ // Get response
+ packet = new DatagramPacket(buf, buf.length);
+ socket.setSoTimeout(10*1000);
+ try {
+ socket.receive(packet);
+ } catch (InterruptedIOException iie) {
+ socket.close();
+ return -1;
+ }
+
+ // Immediately record the incoming timestamp
+ double destinationTimestamp = (System.currentTimeMillis()/1000.0) + SECONDS_1900_TO_EPOCH;
+
+ // Process response
+ NtpMessage msg = new NtpMessage(packet.getData());
+ double roundTripDelay = (destinationTimestamp-msg.originateTimestamp) -
+ (msg.receiveTimestamp-msg.transmitTimestamp);
+ double localClockOffset = ((msg.receiveTimestamp - msg.originateTimestamp) +
+ (msg.transmitTimestamp - destinationTimestamp)) / 2;
+ socket.close();
+
+ long rv = (long)(System.currentTimeMillis() + localClockOffset*1000);
+ //System.out.println("host: " + address.getHostAddress() + " rtt: " + roundTripDelay + " offset: " + localClockOffset + " seconds");
+ return rv;
+ } catch (IOException ioe) {
+ //ioe.printStackTrace();
+ return -1;
+ }
+ }
+
+ public static void main(String[] args) throws IOException {
+ // Process command-line args
+ if(args.length <= 0) {
+ printUsage();
+ return;
+ // args = new String[] { "ntp1.sth.netnod.se", "ntp2.sth.netnod.se" };
+ }
+
+ long now = currentTime(args);
+ System.out.println("Current time: " + new java.util.Date(now));
+ }
+
+
+
+ /**
+ * Prints usage
+ */
+ static void printUsage() {
+ System.out.println(
+ "NtpClient - an NTP client for Java.\n" +
+ "\n" +
+ "This program connects to an NTP server and prints the current time to the console.\n" +
+ "\n" +
+ "\n" +
+ "Usage: java NtpClient server[ server]*\n" +
+ "\n" +
+ "\n" +
+ "This program is copyright (c) Adam Buckley 2004 and distributed under the terms\n" +
+ "of the GNU General Public License. This program is distributed in the hope\n" +
+ "that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n" +
+ "warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n" +
+ "General Public License available at http://www.gnu.org/licenses/gpl.html for\n" +
+ "more details.");
+
+ }
+}
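To make the SNTP arithmetic in currentTime(String) above concrete, here is a standalone worked example (not part of the patch) of the RFC 2030 delay and offset formulas; the class name and timestamp values are made up purely for illustration.

public class SntpOffsetDemo {
    public static void main(String args[]) {
        // made-up timestamps, in seconds (the real code uses seconds since 1900)
        double t1 = 1000.000;  // originate:   request left the client
        double t2 = 1000.120;  // receive:     request arrived at the server
        double t3 = 1000.121;  // transmit:    reply left the server
        double t4 = 1000.250;  // destination: reply arrived back at the client
        double roundTripDelay   = (t4 - t1) - (t3 - t2);         // ~0.249s spent on the network
        double localClockOffset = ((t2 - t1) + (t3 - t4)) / 2;   // ~-0.0045s: the client is slightly fast
        System.out.println("delay=" + roundTripDelay + "s offset=" + localClockOffset + "s");
    }
}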
diff --git a/src/net/i2p/time/NtpMessage.java b/src/net/i2p/time/NtpMessage.java
new file mode 100644
index 0000000..f7626b5
--- /dev/null
+++ b/src/net/i2p/time/NtpMessage.java
@@ -0,0 +1,465 @@
+package net.i2p.time;
+/*
+ * Copyright (c) 2004, Adam Buckley
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * - Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * - Neither the name of Adam Buckley nor the names of its contributors may be
+ * used to endorse or promote products derived from this software without
+ * specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+import java.text.DecimalFormat;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+
+/**
+ * This class represents a NTP message, as specified in RFC 2030. The message
+ * format is compatible with all versions of NTP and SNTP.
+ *
+ * This class does not support the optional authentication protocol, and
+ * ignores the key ID and message digest fields.
+ *
+ * For convenience, this class exposes message values as native Java types, not
+ * the NTP-specified data formats. For example, timestamps are
+ * stored as doubles (as opposed to the NTP unsigned 64-bit fixed point
+ * format).
+ *
+ * However, the constructor NtpMessage(byte[]) and the method toByteArray()
+ * allow the import and export of the raw NTP message format.
+ *
+ *
+ * Usage example
+ *
+ * // Send message
+ * DatagramSocket socket = new DatagramSocket();
+ * InetAddress address = InetAddress.getByName("ntp.cais.rnp.br");
+ * byte[] buf = new NtpMessage().toByteArray();
+ * DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 123);
+ * socket.send(packet);
+ *
+ * // Get response
+ * socket.receive(packet);
+ * System.out.println(msg.toString());
+ *
+ * Comments for member variables are taken from RFC2030 by David Mills,
+ * University of Delaware.
+ *
+ * Number format conversion code in NtpMessage(byte[] array) and toByteArray()
+ * inspired by http://www.pps.jussieu.fr/~jch/enseignement/reseaux/
+ * NTPMessage.java which is copyright (c) 2003 by Juliusz Chroboczek
+ *
+ * @author Adam Buckley
+ */
+public class NtpMessage {
+ /**
+ * This is a two-bit code warning of an impending leap second to be
+ * inserted/deleted in the last minute of the current day. It's values
+ * may be as follows:
+ *
+ * Value Meaning
+ * ----- -------
+ * 0 no warning
+ * 1 last minute has 61 seconds
+ * 2 last minute has 59 seconds
+ * 3 alarm condition (clock not synchronized)
+ */
+ public byte leapIndicator = 0;
+
+
+ /**
+ * This value indicates the NTP/SNTP version number. The version number
+ * is 3 for Version 3 (IPv4 only) and 4 for Version 4 (IPv4, IPv6 and OSI).
+ * If necessary to distinguish between IPv4, IPv6 and OSI, the
+ * encapsulating context must be inspected.
+ */
+ public byte version = 3;
+
+
+ /**
+ * This value indicates the mode, with values defined as follows:
+ *
+ * Mode Meaning
+ * ---- -------
+ * 0 reserved
+ * 1 symmetric active
+ * 2 symmetric passive
+ * 3 client
+ * 4 server
+ * 5 broadcast
+ * 6 reserved for NTP control message
+ * 7 reserved for private use
+ *
+ * In unicast and anycast modes, the client sets this field to 3 (client)
+ * in the request and the server sets it to 4 (server) in the reply. In
+ * multicast mode, the server sets this field to 5 (broadcast).
+ */
+ public byte mode = 0;
+
+
+ /**
+ * This value indicates the stratum level of the local clock, with values
+ * defined as follows:
+ *
+ * Stratum Meaning
+ * ----------------------------------------------
+ * 0 unspecified or unavailable
+ * 1 primary reference (e.g., radio clock)
+ * 2-15 secondary reference (via NTP or SNTP)
+ * 16-255 reserved
+ */
+ public short stratum = 0;
+
+
+ /**
+ * This value indicates the maximum interval between successive messages,
+ * in seconds to the nearest power of two. The values that can appear in
+ * this field presently range from 4 (16 s) to 14 (16284 s); however, most
+ * applications use only the sub-range 6 (64 s) to 10 (1024 s).
+ */
+ public byte pollInterval = 0;
+
+
+ /**
+ * This value indicates the precision of the local clock, in seconds to
+ * the nearest power of two. The values that normally appear in this field
+ * range from -6 for mains-frequency clocks to -20 for microsecond clocks
+ * found in some workstations.
+ */
+ public byte precision = 0;
+
+
+ /**
+ * This value indicates the total roundtrip delay to the primary reference
+ * source, in seconds. Note that this variable can take on both positive
+ * and negative values, depending on the relative time and frequency
+ * offsets. The values that normally appear in this field range from
+ * negative values of a few milliseconds to positive values of several
+ * hundred milliseconds.
+ */
+ public double rootDelay = 0;
+
+
+ /**
+ * This value indicates the nominal error relative to the primary reference
+ * source, in seconds. The values that normally appear in this field
+ * range from 0 to several hundred milliseconds.
+ */
+ public double rootDispersion = 0;
+
+
+ /**
+ * This is a 4-byte array identifying the particular reference source.
+ * In the case of NTP Version 3 or Version 4 stratum-0 (unspecified) or
+ * stratum-1 (primary) servers, this is a four-character ASCII string, left
+ * justified and zero padded to 32 bits. In NTP Version 3 secondary
+ * servers, this is the 32-bit IPv4 address of the reference source. In NTP
+ * Version 4 secondary servers, this is the low order 32 bits of the latest
+ * transmit timestamp of the reference source. NTP primary (stratum 1)
+ * servers should set this field to a code identifying the external
+ * reference source according to the following list. If the external
+ * reference is one of those listed, the associated code should be used.
+ * Codes for sources not listed can be contrived as appropriate.
+ *
+ * Code External Reference Source
+ * ---- -------------------------
+ * LOCL uncalibrated local clock used as a primary reference for
+ * a subnet without external means of synchronization
+ * PPS atomic clock or other pulse-per-second source
+ * individually calibrated to national standards
+ * ACTS NIST dialup modem service
+ * USNO USNO modem service
+ * PTB PTB (Germany) modem service
+ * TDF Allouis (France) Radio 164 kHz
+ * DCF Mainflingen (Germany) Radio 77.5 kHz
+ * MSF Rugby (UK) Radio 60 kHz
+ * WWV Ft. Collins (US) Radio 2.5, 5, 10, 15, 20 MHz
+ * WWVB Boulder (US) Radio 60 kHz
+ * WWVH Kaui Hawaii (US) Radio 2.5, 5, 10, 15 MHz
+ * CHU Ottawa (Canada) Radio 3330, 7335, 14670 kHz
+ * LORC LORAN-C radionavigation system
+ * OMEG OMEGA radionavigation system
+ * GPS Global Positioning Service
+ * GOES Geostationary Orbit Environment Satellite
+ */
+ public byte[] referenceIdentifier = {0, 0, 0, 0};
+
+
+ /**
+ * This is the time at which the local clock was last set or corrected, in
+ * seconds since 00:00 1-Jan-1900.
+ */
+ public double referenceTimestamp = 0;
+
+
+ /**
+ * This is the time at which the request departed the client for the
+ * server, in seconds since 00:00 1-Jan-1900.
+ */
+ public double originateTimestamp = 0;
+
+
+ /**
+ * This is the time at which the request arrived at the server, in seconds
+ * since 00:00 1-Jan-1900.
+ */
+ public double receiveTimestamp = 0;
+
+
+ /**
+ * This is the time at which the reply departed the server for the client,
+ * in seconds since 00:00 1-Jan-1900.
+ */
+ public double transmitTimestamp = 0;
+
+
+
+ /**
+ * Constructs a new NtpMessage from an array of bytes.
+ */
+ public NtpMessage(byte[] array) {
+ // See the packet format diagram in RFC 2030 for details
+ leapIndicator = (byte) ((array[0] >> 6) & 0x3);
+ version = (byte) ((array[0] >> 3) & 0x7);
+ mode = (byte) (array[0] & 0x7);
+ stratum = unsignedByteToShort(array[1]);
+ pollInterval = array[2];
+ precision = array[3];
+
+ rootDelay = (array[4] * 256.0) +
+ unsignedByteToShort(array[5]) +
+ (unsignedByteToShort(array[6]) / 256.0) +
+ (unsignedByteToShort(array[7]) / 65536.0);
+
+ rootDispersion = (unsignedByteToShort(array[8]) * 256.0) +
+ unsignedByteToShort(array[9]) +
+ (unsignedByteToShort(array[10]) / 256.0) +
+ (unsignedByteToShort(array[11]) / 65536.0);
+
+ referenceIdentifier[0] = array[12];
+ referenceIdentifier[1] = array[13];
+ referenceIdentifier[2] = array[14];
+ referenceIdentifier[3] = array[15];
+
+ referenceTimestamp = decodeTimestamp(array, 16);
+ originateTimestamp = decodeTimestamp(array, 24);
+ receiveTimestamp = decodeTimestamp(array, 32);
+ transmitTimestamp = decodeTimestamp(array, 40);
+ }
+
+
+
+ /**
+ * Constructs a new NtpMessage in client -> server mode, and sets the
+ * transmit timestamp to the current time.
+ */
+ public NtpMessage() {
+ // Note that all the other member variables are already set with
+ // appropriate default values.
+ this.mode = 3;
+ this.transmitTimestamp = (System.currentTimeMillis()/1000.0) + 2208988800.0;
+ }
+
+
+
+ /**
+ * This method constructs the data bytes of a raw NTP packet.
+ */
+ public byte[] toByteArray() {
+ // All bytes are automatically set to 0
+ byte[] p = new byte[48];
+
+ p[0] = (byte) (leapIndicator << 6 | version << 3 | mode);
+ p[1] = (byte) stratum;
+ p[2] = (byte) pollInterval;
+ p[3] = (byte) precision;
+
+ // root delay is a signed 16.16-bit FP, in Java an int is 32-bits
+ int l = (int) (rootDelay * 65536.0);
+ p[4] = (byte) ((l >> 24) & 0xFF);
+ p[5] = (byte) ((l >> 16) & 0xFF);
+ p[6] = (byte) ((l >> 8) & 0xFF);
+ p[7] = (byte) (l & 0xFF);
+
+ // root dispersion is an unsigned 16.16-bit FP, in Java there are no
+ // unsigned primitive types, so we use a long which is 64-bits
+ long ul = (long) (rootDispersion * 65536.0);
+ p[8] = (byte) ((ul >> 24) & 0xFF);
+ p[9] = (byte) ((ul >> 16) & 0xFF);
+ p[10] = (byte) ((ul >> 8) & 0xFF);
+ p[11] = (byte) (ul & 0xFF);
+
+ p[12] = referenceIdentifier[0];
+ p[13] = referenceIdentifier[1];
+ p[14] = referenceIdentifier[2];
+ p[15] = referenceIdentifier[3];
+
+ encodeTimestamp(p, 16, referenceTimestamp);
+ encodeTimestamp(p, 24, originateTimestamp);
+ encodeTimestamp(p, 32, receiveTimestamp);
+ encodeTimestamp(p, 40, transmitTimestamp);
+
+ return p;
+ }
+
+
+
+ /**
+ * Returns a string representation of an NtpMessage
+ */
+ public String toString() {
+ String precisionStr = new DecimalFormat("0.#E0").format(Math.pow(2, precision));
+
+ return "Leap indicator: " + leapIndicator + "\n" +
+ "Version: " + version + "\n" +
+ "Mode: " + mode + "\n" +
+ "Stratum: " + stratum + "\n" +
+ "Poll: " + pollInterval + "\n" +
+ "Precision: " + precision + " (" + precisionStr + " seconds)\n" +
+ "Root delay: " + new DecimalFormat("0.00").format(rootDelay*1000) + " ms\n" +
+ "Root dispersion: " + new DecimalFormat("0.00").format(rootDispersion*1000) + " ms\n" +
+ "Reference identifier: " + referenceIdentifierToString(referenceIdentifier, stratum, version) + "\n" +
+ "Reference timestamp: " + timestampToString(referenceTimestamp) + "\n" +
+ "Originate timestamp: " + timestampToString(originateTimestamp) + "\n" +
+ "Receive timestamp: " + timestampToString(receiveTimestamp) + "\n" +
+ "Transmit timestamp: " + timestampToString(transmitTimestamp);
+ }
+
+
+
+ /**
+ * Converts an unsigned byte to a short. By default, Java assumes that
+ * a byte is signed.
+ */
+ public static short unsignedByteToShort(byte b) {
+ if((b & 0x80)==0x80)
+ return (short) (128 + (b & 0x7f));
+ else
+ return (short) b;
+ }
+
+
+
+ /**
+ * Will read 8 bytes of a message beginning at pointer
+ * and return it as a double, according to the NTP 64-bit timestamp
+ * format.
+ */
+ public static double decodeTimestamp(byte[] array, int pointer) {
+ double r = 0.0;
+
+ for(int i=0; i<8; i++) {
+ r += unsignedByteToShort(array[pointer+i]) * Math.pow(2, (3-i)*8);
+ }
+
+ return r;
+ }
+
+
+
+ /**
+ * Encodes a timestamp in the specified position in the message
+ */
+ public static void encodeTimestamp(byte[] array, int pointer, double timestamp) {
+ // Converts a double into a 64-bit fixed point
+ for(int i=0; i<8; i++) {
+ // 2^24, 2^16, 2^8, .. 2^-32
+ double base = Math.pow(2, (3-i)*8);
+
+ // Capture byte value
+ array[pointer+i] = (byte) (timestamp / base);
+
+ // Subtract captured value from remaining total
+ timestamp = timestamp - (double) (unsignedByteToShort(array[pointer+i]) * base);
+ }
+
+ // From RFC 2030: It is advisable to fill the non-significant
+ // low order bits of the timestamp with a random, unbiased
+ // bitstring, both to avoid systematic roundoff errors and as
+ // a means of loop detection and replay detection.
+ array[7+pointer] = (byte) (Math.random()*255.0);
+ }
+
+
+
+ /**
+ * Returns a timestamp (number of seconds since 00:00 1-Jan-1900) as a
+ * formatted date/time string.
+ */
+ public static String timestampToString(double timestamp) {
+ if(timestamp==0) return "0";
+
+ // timestamp is relative to 1900, utc is used by Java and is relative
+ // to 1970
+ double utc = timestamp - (2208988800.0);
+
+ // milliseconds
+ long ms = (long) (utc * 1000.0);
+
+ // date/time
+ String date = new SimpleDateFormat("dd-MMM-yyyy HH:mm:ss").format(new Date(ms));
+
+ // fraction
+ double fraction = timestamp - ((long) timestamp);
+ String fractionString = new DecimalFormat(".000000").format(fraction);
+
+ return date + fractionString;
+ }
+
+
+
+ /**
+ * Returns a string representation of a reference identifier according
+ * to the rules set out in RFC 2030.
+ */
+ public static String referenceIdentifierToString(byte[] ref, short stratum, byte version) {
+ // From the RFC 2030:
+ // In the case of NTP Version 3 or Version 4 stratum-0 (unspecified)
+ // or stratum-1 (primary) servers, this is a four-character ASCII
+ // string, left justified and zero padded to 32 bits.
+ if(stratum==0 || stratum==1) {
+ return new String(ref);
+ }
+
+ // In NTP Version 3 secondary servers, this is the 32-bit IPv4
+ // address of the reference source.
+ else if(version==3) {
+ return unsignedByteToShort(ref[0]) + "." +
+ unsignedByteToShort(ref[1]) + "." +
+ unsignedByteToShort(ref[2]) + "." +
+ unsignedByteToShort(ref[3]);
+ }
+
+ // In NTP Version 4 secondary servers, this is the low order 32 bits
+ // of the latest transmit timestamp of the reference source.
+ else if(version==4) {
+ return "" + ((unsignedByteToShort(ref[0]) / 256.0) +
+ (unsignedByteToShort(ref[1]) / 65536.0) +
+ (unsignedByteToShort(ref[2]) / 16777216.0) +
+ (unsignedByteToShort(ref[3]) / 4294967296.0));
+ }
+
+ return "";
+ }
+}
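A small round-trip check (not part of the patch) of the fixed-point helpers above; the epoch constant mirrors SECONDS_1900_TO_EPOCH from NtpClient, and the demo class name is just for illustration.

import net.i2p.time.NtpMessage;

public class NtpTimestampDemo {
    public static void main(String args[]) {
        // current time in seconds since 1900, the format used throughout NtpMessage
        double ts = (System.currentTimeMillis() / 1000.0) + 2208988800.0;
        byte[] buf = new byte[48];
        NtpMessage.encodeTimestamp(buf, 40, ts);           // write into the transmit-timestamp slot
        double decoded = NtpMessage.decodeTimestamp(buf, 40);
        // the low-order byte is randomized per RFC 2030, so a tiny error is expected
        System.out.println("round trip error: " + Math.abs(decoded - ts) + " seconds");
        System.out.println(NtpMessage.timestampToString(decoded));
    }
}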
diff --git a/src/net/i2p/time/Timestamper.java b/src/net/i2p/time/Timestamper.java
new file mode 100644
index 0000000..5a23e90
--- /dev/null
+++ b/src/net/i2p/time/Timestamper.java
@@ -0,0 +1,300 @@
+package net.i2p.time;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.StringTokenizer;
+
+import net.i2p.I2PAppContext;
+import net.i2p.util.I2PThread;
+import net.i2p.util.Log;
+
+/**
+ * Periodically query a series of NTP servers and update any associated
+ * listeners. It tries the NTP servers in order, contacting them using
+ * SNTP (UDP port 123). By default, it does this every 5 minutes,
+ * forever.
+ */
+public class Timestamper implements Runnable {
+ private I2PAppContext _context;
+ private Log _log;
+ private List _servers;
+ private List _listeners;
+ private int _queryFrequency;
+ private int _concurringServers;
+ private volatile boolean _disabled;
+ private boolean _daemon;
+ private boolean _initialized;
+
+ private static final int DEFAULT_QUERY_FREQUENCY = 5*60*1000;
+ private static final String DEFAULT_SERVER_LIST = "pool.ntp.org, pool.ntp.org, pool.ntp.org";
+ private static final boolean DEFAULT_DISABLED = true;
+ /** how many times do we have to query if we are changing the clock? */
+ private static final int DEFAULT_CONCURRING_SERVERS = 3;
+
+ public static final String PROP_QUERY_FREQUENCY = "time.queryFrequencyMs";
+ public static final String PROP_SERVER_LIST = "time.sntpServerList";
+ public static final String PROP_DISABLED = "time.disabled";
+ public static final String PROP_CONCURRING_SERVERS = "time.concurringServers";
+
+ /** if different SNTP servers differ by more than 10s, someone is b0rked */
+ private static final int MAX_VARIANCE = 10*1000;
+
+ public Timestamper(I2PAppContext ctx) {
+ this(ctx, null, true);
+ }
+
+ public Timestamper(I2PAppContext ctx, UpdateListener lsnr) {
+ this(ctx, lsnr, true);
+ }
+ public Timestamper(I2PAppContext ctx, UpdateListener lsnr, boolean daemon) {
+ _context = ctx;
+ _daemon = daemon;
+ _initialized = false;
+ _servers = new ArrayList(1);
+ _listeners = new ArrayList(1);
+ if (lsnr != null)
+ _listeners.add(lsnr);
+ updateConfig();
+ startTimestamper();
+ }
+
+ public int getServerCount() {
+ synchronized (_servers) {
+ return _servers.size();
+ }
+ }
+ public String getServer(int index) {
+ synchronized (_servers) {
+ return (String)_servers.get(index);
+ }
+ }
+
+ public int getQueryFrequencyMs() { return _queryFrequency; }
+
+ public boolean getIsDisabled() { return _disabled; }
+
+ public void addListener(UpdateListener lsnr) {
+ synchronized (_listeners) {
+ _listeners.add(lsnr);
+ }
+ }
+ public void removeListener(UpdateListener lsnr) {
+ synchronized (_listeners) {
+ _listeners.remove(lsnr);
+ }
+ }
+ public int getListenerCount() {
+ synchronized (_listeners) {
+ return _listeners.size();
+ }
+ }
+ public UpdateListener getListener(int index) {
+ synchronized (_listeners) {
+ return (UpdateListener)_listeners.get(index);
+ }
+ }
+
+ private void startTimestamper() {
+ I2PThread t = new I2PThread(this, "Timestamper");
+ t.setPriority(I2PThread.MIN_PRIORITY);
+ t.setDaemon(_daemon);
+ t.start();
+ }
+
+ public void waitForInitialization() {
+ try {
+ synchronized (this) {
+ if (!_initialized)
+ wait();
+ }
+ } catch (InterruptedException ie) {}
+ }
+
+ public void run() {
+ try { Thread.sleep(1000); } catch (InterruptedException ie) {}
+ _log = _context.logManager().getLog(Timestamper.class);
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Starting timestamper");
+
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Starting up timestamper");
+ boolean lastFailed = false;
+ try {
+ while (true) {
+ updateConfig();
+ if (!_disabled) {
+ String serverList[] = null;
+ synchronized (_servers) {
+ serverList = new String[_servers.size()];
+ for (int i = 0; i < serverList.length; i++)
+ serverList[i] = (String)_servers.get(i);
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Querying servers " + _servers);
+ try {
+ lastFailed = !queryTime(serverList);
+ } catch (IllegalArgumentException iae) {
+ if ( (!lastFailed) && (_log.shouldLog(Log.ERROR)) )
+ _log.error("Unable to reach any of the NTP servers - network disconnected?");
+ lastFailed = true;
+ }
+ }
+
+ _initialized = true;
+ synchronized (this) { notifyAll(); }
+ long sleepTime = _context.random().nextInt(_queryFrequency) + _queryFrequency;
+ if (lastFailed)
+ sleepTime = 30*1000;
+ try { Thread.sleep(sleepTime); } catch (InterruptedException ie) {}
+ }
+ } catch (Throwable t) {
+ _log.log(Log.CRIT, "Timestamper died!", t);
+ synchronized (this) { notifyAll(); }
+ }
+ }
+
+ /**
+ * True if the time was queried successfully, false if it couldn't be
+ */
+ private boolean queryTime(String serverList[]) throws IllegalArgumentException {
+ long found[] = new long[_concurringServers];
+ long now = -1;
+ long expectedDelta = 0;
+ for (int i = 0; i < _concurringServers; i++) {
+ try { Thread.sleep(10*1000); } catch (InterruptedException ie) {}
+ now = NtpClient.currentTime(serverList);
+ long delta = now - _context.clock().now();
+ found[i] = delta;
+ if (i == 0) {
+ if (Math.abs(delta) < MAX_VARIANCE) {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("a single SNTP query was within the tolerance (" + delta + "ms)");
+ break;
+ } else {
+ // outside the tolerance, let's iterate across the concurring queries
+ expectedDelta = delta;
+ }
+ } else {
+ if (Math.abs(delta - expectedDelta) > MAX_VARIANCE) {
+ if (_log.shouldLog(Log.ERROR)) {
+ StringBuffer err = new StringBuffer(96);
+ err.append("SNTP client variance exceeded at query ").append(i);
+ err.append(". expected = ");
+ err.append(expectedDelta);
+ err.append(", found = ");
+ err.append(delta);
+ err.append(" all deltas: ");
+ for (int j = 0; j < found.length; j++)
+ err.append(found[j]).append(' ');
+ _log.error(err.toString());
+ }
+ return false;
+ }
+ }
+ }
+ stampTime(now);
+ if (_log.shouldLog(Log.DEBUG)) {
+ StringBuffer buf = new StringBuffer(64);
+ buf.append("Deltas: ");
+ for (int i = 0; i < found.length; i++)
+ buf.append(found[i]).append(' ');
+ _log.debug(buf.toString());
+ }
+ return true;
+ }
+
+ /**
+ * Notify the registered update listeners of the newly queried time
+ */
+ private void stampTime(long now) {
+ long before = _context.clock().now();
+ synchronized (_listeners) {
+ for (int i = 0; i < _listeners.size(); i++) {
+ UpdateListener lsnr = (UpdateListener)_listeners.get(i);
+ lsnr.setNow(now);
+ }
+ }
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Stamped the time as " + now + " (delta=" + (now-before) + ")");
+ }
+
+ /**
+ * Reload all the config elements from the appContext
+ *
+ */
+ private void updateConfig() {
+ String serverList = _context.getProperty(PROP_SERVER_LIST);
+ if ( (serverList == null) || (serverList.trim().length() <= 0) )
+ serverList = DEFAULT_SERVER_LIST;
+ synchronized (_servers) {
+ _servers.clear();
+ StringTokenizer tok = new StringTokenizer(serverList, ",");
+ while (tok.hasMoreTokens()) {
+ String val = (String)tok.nextToken();
+ val = val.trim();
+ if (val.length() > 0)
+ _servers.add(val);
+ }
+ }
+
+ String freq = _context.getProperty(PROP_QUERY_FREQUENCY);
+ if ( (freq == null) || (freq.trim().length() <= 0) )
+ freq = DEFAULT_QUERY_FREQUENCY + "";
+ try {
+ int ms = Integer.parseInt(freq);
+ if (ms > 60*1000) {
+ _queryFrequency = ms;
+ } else {
+ if ( (_log != null) && (_log.shouldLog(Log.ERROR)) )
+ _log.error("Query frequency once every " + ms + "ms is too fast!");
+ _queryFrequency = DEFAULT_QUERY_FREQUENCY;
+ }
+ } catch (NumberFormatException nfe) {
+ if ( (_log != null) && (_log.shouldLog(Log.WARN)) )
+ _log.warn("Invalid query frequency [" + freq + "], falling back on " + DEFAULT_QUERY_FREQUENCY);
+ _queryFrequency = DEFAULT_QUERY_FREQUENCY;
+ }
+
+ String disabled = _context.getProperty(PROP_DISABLED);
+ if (disabled == null)
+ disabled = DEFAULT_DISABLED + "";
+ _disabled = Boolean.valueOf(disabled).booleanValue();
+
+ String concurring = _context.getProperty(PROP_CONCURRING_SERVERS);
+ if (concurring == null) {
+ _concurringServers = DEFAULT_CONCURRING_SERVERS;
+ } else {
+ try {
+ int servers = Integer.parseInt(concurring);
+ if ( (servers > 0) && (servers < 5) )
+ _concurringServers = servers;
+ else
+ _concurringServers = DEFAULT_CONCURRING_SERVERS;
+ } catch (NumberFormatException nfe) {
+ _concurringServers = DEFAULT_CONCURRING_SERVERS;
+ }
+ }
+ }
+
+ public static void main(String args[]) {
+ System.setProperty(PROP_DISABLED, "false");
+ System.setProperty(PROP_QUERY_FREQUENCY, "30000");
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ long now = ctx.clock().now();
+ for (int i = 0; i < 5*60*1000; i += 61*1000) {
+ try { Thread.sleep(61*1000); } catch (InterruptedException ie) {}
+ }
+ }
+
+ /**
+ * Interface to receive update notifications for when we query the time
+ *
+ */
+ public interface UpdateListener {
+ /**
+ * The time has been queried and we have a current value for 'now'
+ *
+ */
+ public void setNow(long now);
+ }
+}
\ No newline at end of file
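A sketch (not part of the patch) of wiring a listener into the Timestamper above; the demo class name and property values are illustrative, and note that the timestamper is disabled unless time.disabled is set to false.

import net.i2p.I2PAppContext;
import net.i2p.time.Timestamper;

public class TimestamperDemo {
    public static void main(String args[]) {
        System.setProperty(Timestamper.PROP_DISABLED, "false");
        System.setProperty(Timestamper.PROP_QUERY_FREQUENCY, "300000"); // must exceed 60s or the default is used
        Timestamper ts = new Timestamper(I2PAppContext.getGlobalContext(),
                                         new Timestamper.UpdateListener() {
            public void setNow(long now) {
                System.out.println("SNTP says now = " + new java.util.Date(now));
            }
        });
        ts.waitForInitialization(); // blocks until the first query pass completes
    }
}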
diff --git a/src/net/i2p/util/BufferedRandomSource.java b/src/net/i2p/util/BufferedRandomSource.java
new file mode 100644
index 0000000..e344b5a
--- /dev/null
+++ b/src/net/i2p/util/BufferedRandomSource.java
@@ -0,0 +1,228 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2005 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.security.SecureRandom;
+
+import net.i2p.I2PAppContext;
+import net.i2p.crypto.EntropyHarvester;
+import net.i2p.data.Base64;
+import net.i2p.data.DataHelper;
+
+/**
+ * Allocate data out of a large buffer of data, rather than the PRNG's
+ * (likely) small buffer to reduce the frequency of prng recalcs (though
+ * the recalcs are now more time consuming).
+ *
+ */
+public class BufferedRandomSource extends RandomSource {
+ private byte _buffer[];
+ private int _nextByte;
+ private int _nextBit;
+ private static volatile long _reseeds;
+
+ private static final int DEFAULT_BUFFER_SIZE = 256*1024;
+
+ public BufferedRandomSource(I2PAppContext context) {
+ this(context, DEFAULT_BUFFER_SIZE);
+ }
+ public BufferedRandomSource(I2PAppContext context, int bufferSize) {
+ super(context);
+ context.statManager().createRateStat("prng.reseedCount", "How many times the prng has been reseeded", "Encryption", new long[] { 60*1000, 10*60*1000, 60*60*1000 } );
+ _buffer = new byte[bufferSize];
+ refillBuffer();
+ // stagger reseeding
+ _nextByte = ((int)_reseeds-1) * 16 * 1024;
+ }
+
+ private final void refillBuffer() {
+ long before = System.currentTimeMillis();
+ doRefillBuffer();
+ long duration = System.currentTimeMillis() - before;
+ if ( (_reseeds % 1) == 0)
+ _context.statManager().addRateData("prng.reseedCount", _reseeds, duration);
+ }
+
+ private synchronized final void doRefillBuffer() {
+ super.nextBytes(_buffer);
+ _nextByte = 0;
+ _nextBit = 0;
+ _reseeds++;
+ }
+
+ private static final byte GOBBLE_MASK[] = { 0x0, // 0 bits
+ 0x1, // 1 bit
+ 0x3, // 2 bits
+ 0x7, // 3 bits
+ 0xF, // 4 bits
+ 0x1F, // 5 bits
+ 0x3F, // 6 bits
+ 0x7F, // 7 bits
+ (byte)0xFF // 8 bits
+ };
+
+ private synchronized final long nextBits(int numBits) {
+ if (false) {
+ long rv = 0;
+ for (int curBit = 0; curBit < numBits; curBit++) {
+ if (_nextBit >= 8) {
+ _nextBit = 0;
+ _nextByte++;
+ }
+ if (_nextByte >= _buffer.length)
+ refillBuffer();
+ rv += (_buffer[_nextByte] << curBit);
+ _nextBit++;
+ /*
+ int avail = 8 - _nextBit;
+ // this is not correct! (or is it?)
+ rv += (_buffer[_nextByte] << 8 - avail);
+ _nextBit += avail;
+ numBits -= avail;
+ if (_nextBit >= 8) {
+ _nextBit = 0;
+ _nextByte++;
+ }
+ */
+ }
+ return rv;
+ } else {
+ long rv = 0;
+ int curBit = 0;
+ while (curBit < numBits) {
+ if (_nextBit >= 8) {
+ _nextBit = 0;
+ _nextByte++;
+ }
+ if (_nextByte >= _buffer.length)
+ refillBuffer();
+ int gobbleBits = 8 - _nextBit;
+ int want = numBits - curBit;
+ if (gobbleBits > want)
+ gobbleBits = want;
+ curBit += gobbleBits;
+ int shift = 8 - _nextBit - gobbleBits;
+ int c = (_buffer[_nextByte] & (GOBBLE_MASK[gobbleBits] << shift));
+ rv += ((c >>> shift) << (curBit-gobbleBits));
+ _nextBit += gobbleBits;
+ }
+ return rv;
+ }
+ }
+
+ public synchronized final void nextBytes(byte buf[]) {
+ int outOffset = 0;
+ while (outOffset < buf.length) {
+ int availableBytes = _buffer.length - _nextByte - (_nextBit != 0 ? 1 : 0);
+ if (availableBytes <= 0)
+ refillBuffer();
+ int start = _buffer.length - availableBytes;
+ int writeSize = Math.min(buf.length - outOffset, availableBytes);
+ System.arraycopy(_buffer, start, buf, outOffset, writeSize);
+ outOffset += writeSize;
+ _nextByte += writeSize;
+ _nextBit = 0;
+ }
+ }
+
+ public final int nextInt(int n) {
+ if (n <= 0) return 0;
+ int val = ((int)nextBits(countBits(n))) % n;
+ if (val < 0)
+ return 0 - val;
+ else
+ return val;
+ }
+
+ public final int nextInt() { return nextInt(Integer.MAX_VALUE); }
+
+ /**
+ * Like the modified nextInt, nextLong(n) returns a random number from 0
+ * (inclusive) to n (exclusive).
+ */
+ public final long nextLong(long n) {
+ if (n <= 0) return 0;
+ long val = nextBits(countBits(n)) % n;
+ if (val < 0)
+ return 0 - val;
+ else
+ return val;
+ }
+
+ public final long nextLong() { return nextLong(Long.MAX_VALUE); }
+
+ static final int countBits(long val) {
+ int rv = 0;
+ while (val > Integer.MAX_VALUE) {
+ rv += 31;
+ val >>>= 31;
+ }
+
+ while (val > 0) {
+ rv++;
+ val >>= 1;
+ }
+ return rv;
+ }
+
+ /**
+ * override as synchronized, for those JVMs that don't always pull via
+ * nextBytes (cough ibm)
+ */
+ public final boolean nextBoolean() {
+ return nextBits(1) != 0;
+ }
+
+ private static final double DOUBLE_DENOMINATOR = (double)(1L << 53);
+ /** defined per javadoc ( ((nextBits(26)<<27) + nextBits(27)) / (1 << 53)) */
+ public final double nextDouble() {
+ long top = (((long)nextBits(26) << 27) + nextBits(27));
+ return top / DOUBLE_DENOMINATOR;
+ }
+ private static final float FLOAT_DENOMINATOR = (float)(1 << 24);
+ /** defined per javadoc (nextBits(24) / ((float)(1 << 24)) ) */
+ public float nextFloat() {
+ long top = nextBits(24);
+ return top / FLOAT_DENOMINATOR;
+ }
+ public double nextGaussian() {
+ // bah, unbuffered
+ return super.nextGaussian();
+ }
+
+ public static void main(String args[]) {
+ for (int i = 0; i < 16; i++)
+ test();
+ }
+ private static void test() {
+ I2PAppContext ctx = I2PAppContext.getGlobalContext();
+ byte data[] = new byte[16*1024];
+ for (int i = 0; i < data.length; i += 4) {
+ long l = ctx.random().nextLong();
+ if (l < 0) l = 0 - l;
+ DataHelper.toLong(data, i, 4, l);
+ }
+ byte compressed[] = DataHelper.compress(data);
+ System.out.println("Data: " + data.length + "/" + compressed.length + ": " + toString(data));
+ }
+ private static final String toString(byte data[]) {
+ StringBuffer buf = new StringBuffer(data.length * 9);
+ for (int i = 0; i < data.length; i++) {
+ for (int j = 0; j < 8; j++) {
+ if ((data[i] & (1 << j)) != 0)
+ buf.append('1');
+ else
+ buf.append('0');
+ }
+ buf.append(' ');
+ }
+ return buf.toString();
+ }
+}
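A quick illustration (not part of the patch) of the bounded-draw methods above; the demo class name and bound values are arbitrary.

import net.i2p.I2PAppContext;
import net.i2p.util.BufferedRandomSource;

public class BufferedRandomDemo {
    public static void main(String args[]) {
        BufferedRandomSource rnd = new BufferedRandomSource(I2PAppContext.getGlobalContext());
        // nextInt(100) pulls countBits(100) == 7 bits from the shared buffer and
        // reduces the result mod 100, so values fall in [0, 100)
        for (int i = 0; i < 5; i++)
            System.out.println(rnd.nextInt(100) + " " + rnd.nextLong(1000L));
    }
}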
diff --git a/src/net/i2p/util/ByteCache.java b/src/net/i2p/util/ByteCache.java
new file mode 100644
index 0000000..19c6f5b
--- /dev/null
+++ b/src/net/i2p/util/ByteCache.java
@@ -0,0 +1,126 @@
+package net.i2p.util;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.ByteArray;
+
+/**
+ * Cache the objects frequently used to reduce memory churn. The ByteArray
+ * should be held onto as long as the data referenced in it is needed.
+ *
+ */
+public final class ByteCache {
+ private static Map _caches = new HashMap(16);
+ /**
+ * Get a cache responsible for objects of the given size
+ *
+ * @param cacheSize how many entries we want the cache to hold before falling
+ * back to on-demand allocation
+ * @param size how large should the cached objects be?
+ */
+ public static ByteCache getInstance(int cacheSize, int size) {
+ Integer sz = new Integer(size);
+ ByteCache cache = null;
+ synchronized (_caches) {
+ if (!_caches.containsKey(sz))
+ _caches.put(sz, new ByteCache(cacheSize, size));
+ cache = (ByteCache)_caches.get(sz);
+ }
+ cache.resize(cacheSize);
+ return cache;
+ }
+ private Log _log;
+ /** list of available (cached) entries */
+ private List _available;
+ private int _maxCached;
+ private int _entrySize;
+ private long _lastOverflow;
+
+ /** do we actually want to cache? */
+ private static final boolean _cache = true;
+
+ /** how often do we cleanup the cache */
+ private static final int CLEANUP_FREQUENCY = 30*1000;
+ /** if we haven't exceeded the cache size in 2 minutes, cut our cache in half */
+ private static final long EXPIRE_PERIOD = 2*60*1000;
+
+ private ByteCache(int maxCachedEntries, int entrySize) {
+ if (_cache)
+ _available = new ArrayList(maxCachedEntries);
+ _maxCached = maxCachedEntries;
+ _entrySize = entrySize;
+ _lastOverflow = -1;
+ SimpleTimer.getInstance().addEvent(new Cleanup(), CLEANUP_FREQUENCY);
+ _log = I2PAppContext.getGlobalContext().logManager().getLog(ByteCache.class);
+ }
+
+ private void resize(int maxCachedEntries) {
+ if (_maxCached >= maxCachedEntries) return;
+ _maxCached = maxCachedEntries;
+ }
+
+ /**
+ * Get the next available structure, either from the cache or a brand new one
+ *
+ */
+ public final ByteArray acquire() {
+ if (_cache) {
+ synchronized (_available) {
+ if (_available.size() > 0)
+ return (ByteArray)_available.remove(0);
+ }
+ }
+ _lastOverflow = System.currentTimeMillis();
+ byte data[] = new byte[_entrySize];
+ ByteArray rv = new ByteArray(data);
+ rv.setValid(0);
+ rv.setOffset(0);
+ return rv;
+ }
+
+ /**
+ * Put this structure back onto the available cache for reuse
+ *
+ */
+ public final void release(ByteArray entry) {
+ release(entry, true);
+ }
+ public final void release(ByteArray entry, boolean shouldZero) {
+ if (_cache) {
+ if ( (entry == null) || (entry.getData() == null) )
+ return;
+
+ entry.setValid(0);
+ entry.setOffset(0);
+
+ if (shouldZero)
+ Arrays.fill(entry.getData(), (byte)0x0);
+ synchronized (_available) {
+ if (_available.size() < _maxCached)
+ _available.add(entry);
+ }
+ }
+ }
+
+ private class Cleanup implements SimpleTimer.TimedEvent {
+ public void timeReached() {
+ if (System.currentTimeMillis() - _lastOverflow > EXPIRE_PERIOD) {
+ // we haven't exceeded the cache size in a few minutes, so let's
+ // shrink the cache
+ synchronized (_available) {
+ int toRemove = _available.size() / 2;
+ for (int i = 0; i < toRemove; i++)
+ _available.remove(0);
+ if ( (toRemove > 0) && (_log.shouldLog(Log.DEBUG)) )
+ _log.debug("Removing " + toRemove + " cached entries of size " + _entrySize);
+ }
+ }
+ SimpleTimer.getInstance().addEvent(Cleanup.this, CLEANUP_FREQUENCY);
+ }
+ }
+}
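A minimal acquire/release cycle (not part of the patch) for the cache above; the demo class name and sizes are arbitrary.

import net.i2p.data.ByteArray;
import net.i2p.util.ByteCache;

public class ByteCacheDemo {
    public static void main(String args[]) {
        // keep up to 16 buffers of 4KB each around for reuse
        ByteCache cache = ByteCache.getInstance(16, 4*1024);
        ByteArray ba = cache.acquire();
        byte[] data = ba.getData();   // 4KB scratch buffer, valid length starts at 0
        // ... use data ...
        cache.release(ba);            // zeroes the buffer and returns it to the pool
    }
}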
diff --git a/src/net/i2p/util/CachingByteArrayOutputStream.java b/src/net/i2p/util/CachingByteArrayOutputStream.java
new file mode 100644
index 0000000..e1d3354
--- /dev/null
+++ b/src/net/i2p/util/CachingByteArrayOutputStream.java
@@ -0,0 +1,27 @@
+package net.i2p.util;
+
+import java.io.ByteArrayOutputStream;
+
+import net.i2p.data.ByteArray;
+
+/**
+ * simple extension to the baos to try to use a ByteCache for its
+ * internal buffer. This caching only works when the array size
+ * provided is sufficient for the entire buffer. After doing what
+ * needs to be done (e.g. write(foo); toByteArray();), call releaseBuffer
+ * to put the buffer back into the cache.
+ *
+ */
+public class CachingByteArrayOutputStream extends ByteArrayOutputStream {
+ private ByteCache _cache;
+ private ByteArray _buf;
+
+ public CachingByteArrayOutputStream(int cacheQuantity, int arraySize) {
+ super(0);
+ _cache = ByteCache.getInstance(cacheQuantity, arraySize);
+ _buf = _cache.acquire();
+ super.buf = _buf.getData();
+ }
+
+ public void releaseBuffer() { _cache.release(_buf); }
+}
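And the stream wrapper above in use, as a sketch (not part of the patch); the demo class name and sizes are illustrative, and per the javadoc the cached buffer only helps while the written data fits in arraySize.

import java.io.IOException;

import net.i2p.util.CachingByteArrayOutputStream;

public class CachingBaosDemo {
    public static void main(String args[]) throws IOException {
        // pool up to 8 buffers of 32KB each
        CachingByteArrayOutputStream baos = new CachingByteArrayOutputStream(8, 32*1024);
        baos.write("hello".getBytes());
        byte[] copy = baos.toByteArray(); // copies out of the cached buffer
        baos.releaseBuffer();             // hand the 32KB buffer back to the ByteCache
        System.out.println(copy.length + " bytes written");
    }
}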
diff --git a/src/net/i2p/util/Clock.java b/src/net/i2p/util/Clock.java
new file mode 100644
index 0000000..d3f56e2
--- /dev/null
+++ b/src/net/i2p/util/Clock.java
@@ -0,0 +1,147 @@
+package net.i2p.util;
+
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Set;
+
+import net.i2p.I2PAppContext;
+import net.i2p.time.Timestamper;
+
+/**
+ * Alternate location for determining the time which takes into account an offset.
+ * This offset will ideally be periodically updated so as to serve as the difference
+ * between the local computer's current time and the time as known by some reference
+ * (such as an NTP synchronized clock).
+ *
+ */
+public class Clock implements Timestamper.UpdateListener {
+ private I2PAppContext _context;
+ private Timestamper _timestamper;
+ private long _startedOn;
+ private boolean _statCreated;
+
+ public Clock(I2PAppContext context) {
+ _context = context;
+ _offset = 0;
+ _alreadyChanged = false;
+ _listeners = new HashSet(64);
+ _timestamper = new Timestamper(context, this);
+ _startedOn = System.currentTimeMillis();
+ _statCreated = false;
+ }
+ public static Clock getInstance() {
+ return I2PAppContext.getGlobalContext().clock();
+ }
+
+ public Timestamper getTimestamper() { return _timestamper; }
+
+ /** we fetch it on demand to avoid circular dependencies (logging uses the clock) */
+ private Log getLog() { return _context.logManager().getLog(Clock.class); }
+
+ private volatile long _offset;
+ private boolean _alreadyChanged;
+ private Set _listeners;
+
+ /** if the clock is skewed by 3+ days, fuck 'em */
+ public final static long MAX_OFFSET = 3 * 24 * 60 * 60 * 1000;
+ /** after we've started up and shifted the clock, don't allow shifts of more than 10 minutes */
+ public final static long MAX_LIVE_OFFSET = 10 * 60 * 1000;
+ /** if the clock skew changes by less than 10s, ignore the update (so we don't slide all over the place) */
+ public final static long MIN_OFFSET_CHANGE = 10 * 1000;
+
+ public void setOffset(long offsetMs) {
+ setOffset(offsetMs, false);
+ }
+
+ /**
+ * Specify how far away from the "correct" time the computer is - a positive
+ * value means that we are slow, while a negative value means we are fast.
+ *
+ */
+ public void setOffset(long offsetMs, boolean force) {
+ if (false) return;
+ long delta = offsetMs - _offset;
+ if (!force) {
+ if ((offsetMs > MAX_OFFSET) || (offsetMs < 0 - MAX_OFFSET)) {
+ getLog().error("Maximum offset shift exceeded [" + offsetMs + "], NOT HONORING IT");
+ return;
+ }
+
+ // only allow substantial modifications before the first 10 minutes
+ if (_alreadyChanged && (System.currentTimeMillis() - _startedOn > 10 * 60 * 1000)) {
+ if ( (delta > MAX_LIVE_OFFSET) || (delta < 0 - MAX_LIVE_OFFSET) ) {
+ getLog().log(Log.CRIT, "The clock has already been updated, but you want to change it by "
+ + delta + " to " + offsetMs + "? Did something break?");
+ return;
+ }
+ }
+
+ if ((delta < MIN_OFFSET_CHANGE) && (delta > 0 - MIN_OFFSET_CHANGE)) {
+ getLog().debug("Not changing offset since it is only " + delta + "ms");
+ _alreadyChanged = true;
+ return;
+ }
+ }
+ if (_alreadyChanged) {
+ if (delta > 15*1000)
+ getLog().log(Log.CRIT, "Updating clock offset to " + offsetMs + "ms from " + _offset + "ms");
+ else if (getLog().shouldLog(Log.INFO))
+ getLog().info("Updating clock offset to " + offsetMs + "ms from " + _offset + "ms");
+
+ if (!_statCreated)
+ _context.statManager().createRateStat("clock.skew", "How far is the already adjusted clock being skewed?", "Clock", new long[] { 10*60*1000, 3*60*60*1000, 24*60*60*1000 });
+ _statCreated = true;
+ _context.statManager().addRateData("clock.skew", delta, 0);
+ } else {
+ getLog().log(Log.INFO, "Initializing clock offset to " + offsetMs + "ms from " + _offset + "ms");
+ }
+ _alreadyChanged = true;
+ _offset = offsetMs;
+ fireOffsetChanged(delta);
+ }
+
+ public long getOffset() {
+ return _offset;
+ }
+
+ public boolean getUpdatedSuccessfully() { return _alreadyChanged; }
+
+ public void setNow(long realTime) {
+ long diff = realTime - System.currentTimeMillis();
+ setOffset(diff);
+ }
+
+ /**
+ * Retrieve the current time synchronized with whatever reference clock is in
+ * use.
+ *
+ */
+ public long now() {
+ return _offset + System.currentTimeMillis();
+ }
+
+ public void addUpdateListener(ClockUpdateListener lsnr) {
+ synchronized (_listeners) {
+ _listeners.add(lsnr);
+ }
+ }
+
+ public void removeUpdateListener(ClockUpdateListener lsnr) {
+ synchronized (_listeners) {
+ _listeners.remove(lsnr);
+ }
+ }
+
+ private void fireOffsetChanged(long delta) {
+ synchronized (_listeners) {
+ for (Iterator iter = _listeners.iterator(); iter.hasNext();) {
+ ClockUpdateListener lsnr = (ClockUpdateListener) iter.next();
+ lsnr.offsetChanged(delta);
+ }
+ }
+ }
+
+ public static interface ClockUpdateListener {
+ public void offsetChanged(long delta);
+ }
+}
\ No newline at end of file
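To illustrate the offset convention above (a positive offset means the local clock is slow), a tiny sketch not part of the patch; the demo class name, the 5 second skew, and the use of force=true are just for demonstration.

import net.i2p.I2PAppContext;

public class ClockDemo {
    public static void main(String args[]) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        long system = System.currentTimeMillis();
        // pretend a reference clock told us we are 5 seconds slow
        ctx.clock().setOffset(5*1000, true);   // force=true bypasses the sanity limits
        System.out.println("offset:       " + ctx.clock().getOffset() + "ms");
        System.out.println("now - system: " + (ctx.clock().now() - system) + "ms");
    }
}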
diff --git a/src/net/i2p/util/EepGet.java b/src/net/i2p/util/EepGet.java
new file mode 100644
index 0000000..f3e58d0
--- /dev/null
+++ b/src/net/i2p/util/EepGet.java
@@ -0,0 +1,744 @@
+package net.i2p.util;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.MalformedURLException;
+import java.net.Socket;
+import java.net.URL;
+import java.text.DecimalFormat;
+import java.text.NumberFormat;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.StringTokenizer;
+import java.util.Properties;
+
+import net.i2p.I2PAppContext;
+import net.i2p.data.DataHelper;
+
+/**
+ * EepGet [-p localhost:4444]
+ * [-n #retries]
+ * [-o outputFile]
+ * [-m markSize lineLen]
+ * url
+ */
+public class EepGet {
+ private I2PAppContext _context;
+ private Log _log;
+ private boolean _shouldProxy;
+ private String _proxyHost;
+ private int _proxyPort;
+ private int _numRetries;
+ private String _outputFile;
+ private String _url;
+ private String _postData;
+ private boolean _allowCaching;
+ private List _listeners;
+
+ private boolean _keepFetching;
+ private Socket _proxy;
+ private OutputStream _proxyOut;
+ private InputStream _proxyIn;
+ private OutputStream _out;
+ private long _alreadyTransferred;
+ private long _bytesTransferred;
+ private long _bytesRemaining;
+ private int _currentAttempt;
+ private String _etag;
+ private boolean _encodingChunked;
+ private boolean _notModified;
+ private String _contentType;
+ private boolean _transferFailed;
+ private boolean _headersRead;
+ private boolean _aborted;
+ private long _fetchHeaderTimeout;
+
+ public EepGet(I2PAppContext ctx, String proxyHost, int proxyPort, int numRetries, String outputFile, String url) {
+ this(ctx, true, proxyHost, proxyPort, numRetries, outputFile, url);
+ }
+ public EepGet(I2PAppContext ctx, String proxyHost, int proxyPort, int numRetries, String outputFile, String url, boolean allowCaching) {
+ this(ctx, true, proxyHost, proxyPort, numRetries, outputFile, url, allowCaching, null);
+ }
+ public EepGet(I2PAppContext ctx, int numRetries, String outputFile, String url) {
+ this(ctx, false, null, -1, numRetries, outputFile, url);
+ }
+ public EepGet(I2PAppContext ctx, int numRetries, String outputFile, String url, boolean allowCaching) {
+ this(ctx, false, null, -1, numRetries, outputFile, url, allowCaching, null);
+ }
+ public EepGet(I2PAppContext ctx, boolean shouldProxy, String proxyHost, int proxyPort, int numRetries, String outputFile, String url) {
+ this(ctx, shouldProxy, proxyHost, proxyPort, numRetries, outputFile, url, true, null);
+ }
+ public EepGet(I2PAppContext ctx, boolean shouldProxy, String proxyHost, int proxyPort, int numRetries, String outputFile, String url, String postData) {
+ this(ctx, shouldProxy, proxyHost, proxyPort, numRetries, outputFile, url, true, null, postData);
+ }
+ public EepGet(I2PAppContext ctx, boolean shouldProxy, String proxyHost, int proxyPort, int numRetries, String outputFile, String url, boolean allowCaching, String etag) {
+ this(ctx, shouldProxy, proxyHost, proxyPort, numRetries, outputFile, url, allowCaching, etag, null);
+ }
+ public EepGet(I2PAppContext ctx, boolean shouldProxy, String proxyHost, int proxyPort, int numRetries, String outputFile, String url, boolean allowCaching, String etag, String postData) {
+ _context = ctx;
+ _log = ctx.logManager().getLog(EepGet.class);
+ _shouldProxy = shouldProxy;
+ _proxyHost = proxyHost;
+ _proxyPort = proxyPort;
+ _numRetries = numRetries;
+ _outputFile = outputFile;
+ _url = url;
+ _postData = postData;
+ _alreadyTransferred = 0;
+ _bytesTransferred = 0;
+ _bytesRemaining = -1;
+ _currentAttempt = 0;
+ _transferFailed = false;
+ _headersRead = false;
+ _aborted = false;
+ _fetchHeaderTimeout = 30*1000;
+ _listeners = new ArrayList(1);
+ _etag = etag;
+ }
+
+ /**
+ * EepGet [-p localhost:4444] [-n #retries] [-e etag] [-o outputFile] [-m markSize lineLen] url
+ *
+ */
+ public static void main(String args[]) {
+ String proxyHost = "localhost";
+ int proxyPort = 4444;
+ int numRetries = 5;
+ int markSize = 1024;
+ int lineLen = 40;
+ String etag = null;
+ String saveAs = null;
+ String url = null;
+ try {
+ for (int i = 0; i < args.length; i++) {
+ if (args[i].equals("-p")) {
+ proxyHost = args[i+1].substring(0, args[i+1].indexOf(':'));
+ String port = args[i+1].substring(args[i+1].indexOf(':')+1);
+ proxyPort = Integer.parseInt(port);
+ i++;
+ } else if (args[i].equals("-n")) {
+ numRetries = Integer.parseInt(args[i+1]);
+ i++;
+ } else if (args[i].equals("-e")) {
+ etag = "\"" + args[i+1] + "\"";
+ i++;
+ } else if (args[i].equals("-o")) {
+ saveAs = args[i+1];
+ i++;
+ } else if (args[i].equals("-m")) {
+ markSize = Integer.parseInt(args[i+1]);
+ lineLen = Integer.parseInt(args[i+2]);
+ i += 2;
+ } else {
+ url = args[i];
+ }
+ }
+ } catch (Exception e) {
+ e.printStackTrace();
+ usage();
+ return;
+ }
+
+ if (url == null) {
+ usage();
+ return;
+ }
+ if (saveAs == null)
+ saveAs = suggestName(url);
+
+ EepGet get = new EepGet(I2PAppContext.getGlobalContext(), true, proxyHost, proxyPort, numRetries, saveAs, url, true, etag);
+ get.addStatusListener(get.new CLIStatusListener(markSize, lineLen));
+ get.fetch();
+ }
+
+ public static String suggestName(String url) {
+ int last = url.lastIndexOf('/');
+ if ((last < 0) || (url.lastIndexOf('#') > last))
+ last = url.lastIndexOf('#');
+ if ((last < 0) || (url.lastIndexOf('?') > last))
+ last = url.lastIndexOf('?');
+ if ((last < 0) || (url.lastIndexOf('=') > last))
+ last = url.lastIndexOf('=');
+
+ String name = null;
+ if (last >= 0)
+ name = sanitize(url.substring(last+1));
+ if ( (name != null) && (name.length() > 0) )
+ return name;
+ else
+ return sanitize(url);
+ }
+
+ private static final String _safeChars = "abcdefghijklmnopqrstuvwxyz" +
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
+ "01234567890.,_=@#:";
+ private static String sanitize(String name) {
+ name = name.replace('/', '_');
+ StringBuffer buf = new StringBuffer(name);
+ for (int i = 0; i < name.length(); i++)
+ if (_safeChars.indexOf(buf.charAt(i)) == -1)
+ buf.setCharAt(i, '_');
+ return buf.toString();
+ }
+
+ private static void usage() {
+ System.err.println("EepGet [-p localhost:4444] [-n #retries] [-o outputFile] [-m markSize lineLen] url");
+ }
+
+ public static interface StatusListener {
+ public void bytesTransferred(long alreadyTransferred, int currentWrite, long bytesTransferred, long bytesRemaining, String url);
+ public void transferComplete(long alreadyTransferred, long bytesTransferred, long bytesRemaining, String url, String outputFile, boolean notModified);
+ public void attemptFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt, int numRetries, Exception cause);
+ public void transferFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt);
+ public void headerReceived(String url, int currentAttempt, String key, String val);
+ public void attempting(String url);
+ }
+ private class CLIStatusListener implements StatusListener {
+ private int _markSize;
+ private int _lineSize;
+ private long _startedOn;
+ private long _written;
+ private long _lastComplete;
+ private DecimalFormat _pct = new DecimalFormat("00.0%");
+ private DecimalFormat _kbps = new DecimalFormat("###,000.00");
+ public CLIStatusListener() {
+ this(1024, 40);
+ }
+ public CLIStatusListener(int markSize, int lineSize) {
+ _markSize = markSize;
+ _lineSize = lineSize;
+ _written = 0;
+ _lastComplete = _context.clock().now();
+ _startedOn = _lastComplete;
+ }
+ public void bytesTransferred(long alreadyTransferred, int currentWrite, long bytesTransferred, long bytesRemaining, String url) {
+ for (int i = 0; i < currentWrite; i++) {
+ _written++;
+ if ( (_markSize > 0) && (_written % _markSize == 0) ) {
+ System.out.print("#");
+
+ if ( (_lineSize > 0) && (_written % ((long)_markSize*(long)_lineSize) == 0l) ) {
+ long now = _context.clock().now();
+ long timeToSend = now - _lastComplete;
+ if (timeToSend > 0) {
+ StringBuffer buf = new StringBuffer(50);
+ buf.append(" ");
+ if ( bytesRemaining > 0 ) {
+ double pct = ((double)alreadyTransferred + (double)_written) / ((double)alreadyTransferred + (double)bytesRemaining);
+ synchronized (_pct) {
+ buf.append(_pct.format(pct));
+ }
+ buf.append(": ");
+ }
+ buf.append(_written+alreadyTransferred);
+ buf.append(" @ ");
+ double lineKBytes = ((double)_markSize * (double)_lineSize)/1024.0d;
+ double kbps = lineKBytes/((double)timeToSend/1000.0d);
+ synchronized (_kbps) {
+ buf.append(_kbps.format(kbps));
+ }
+ buf.append("KBps");
+
+ buf.append(" / ");
+ long lifetime = _context.clock().now() - _startedOn;
+ double lifetimeKBps = (1000.0d*(double)(_written+alreadyTransferred)/((double)lifetime*1024.0d));
+ synchronized (_kbps) {
+ buf.append(_kbps.format(lifetimeKBps));
+ }
+ buf.append("KBps");
+ System.out.println(buf.toString());
+ }
+ _lastComplete = now;
+ }
+ }
+ }
+ }
+ public void transferComplete(long alreadyTransferred, long bytesTransferred, long bytesRemaining, String url, String outputFile, boolean notModified) {
+ System.out.println();
+ System.out.println("== " + new Date());
+ if (notModified) {
+ System.out.println("== Source not modified since last download");
+ } else {
+ if ( bytesRemaining > 0 ) {
+ System.out.println("== Transfer of " + url + " completed with " + (alreadyTransferred+bytesTransferred)
+ + " and " + (bytesRemaining - bytesTransferred) + " remaining");
+ System.out.println("== Output saved to " + outputFile);
+ } else {
+ System.out.println("== Transfer of " + url + " completed with " + (alreadyTransferred+bytesTransferred)
+ + " bytes transferred");
+ System.out.println("== Output saved to " + outputFile);
+ }
+ }
+ long timeToSend = _context.clock().now() - _startedOn;
+ System.out.println("== Transfer time: " + DataHelper.formatDuration(timeToSend));
+ System.out.println("== ETag: " + _etag);
+ StringBuffer buf = new StringBuffer(50);
+ buf.append("== Transfer rate: ");
+ double kbps = (1000.0d*(double)(_written)/((double)timeToSend*1024.0d));
+ synchronized (_kbps) {
+ buf.append(_kbps.format(kbps));
+ }
+ buf.append("KBps");
+ System.out.println(buf.toString());
+ }
+ public void attemptFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt, int numRetries, Exception cause) {
+ System.out.println();
+ System.out.println("** " + new Date());
+ System.out.println("** Attempt " + currentAttempt + " of " + url + " failed");
+ System.out.println("** Transfered " + bytesTransferred
+ + " with " + (bytesRemaining < 0 ? "unknown" : ""+bytesRemaining) + " remaining");
+ System.out.println("** " + cause.getMessage());
+ _written = 0;
+ }
+ public void transferFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt) {
+ System.out.println("== " + new Date());
+ System.out.println("== Transfer of " + url + " failed after " + currentAttempt + " attempts");
+ System.out.println("== Transfer size: " + bytesTransferred + " with "
+ + (bytesRemaining < 0 ? "unknown" : ""+bytesRemaining) + " remaining");
+ long timeToSend = _context.clock().now() - _startedOn;
+ System.out.println("== Transfer time: " + DataHelper.formatDuration(timeToSend));
+ double kbps = (timeToSend > 0 ? (1000.0d*(double)(bytesTransferred)/((double)timeToSend*1024.0d)) : 0);
+ StringBuffer buf = new StringBuffer(50);
+ buf.append("== Transfer rate: ");
+ synchronized (_kbps) {
+ buf.append(_kbps.format(kbps));
+ }
+ buf.append("KBps");
+ System.out.println(buf.toString());
+ }
+ public void attempting(String url) {}
+ public void headerReceived(String url, int currentAttempt, String key, String val) {}
+ }
+
+ public void addStatusListener(StatusListener lsnr) {
+ synchronized (_listeners) { _listeners.add(lsnr); }
+ }
+
+ public void stopFetching() { _keepFetching = false; }
+ /**
+ * Blocking fetch, returning true if the URL was retrieved, false if all retries failed
+ *
+ */
+ public boolean fetch() { return fetch(_fetchHeaderTimeout); }
+ /**
+ * Blocking fetch, timing out individual attempts if the HTTP response headers
+ * don't come back in the time given. If the timeout is zero or less, this will
+ * wait indefinitely.
+ */
+ public boolean fetch(long fetchHeaderTimeout) {
+ _fetchHeaderTimeout = fetchHeaderTimeout;
+ _keepFetching = true;
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Fetching (proxied? " + _shouldProxy + ") url=" + _url);
+ while (_keepFetching) {
+ try {
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).attempting(_url);
+ sendRequest();
+ doFetch();
+ return true;
+ } catch (IOException ioe) {
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).attemptFailed(_url, _bytesTransferred, _bytesRemaining, _currentAttempt, _numRetries, ioe);
+ } finally {
+ if (_out != null) {
+ try {
+ _out.close();
+ } catch (IOException cioe) {}
+ _out = null;
+ }
+ if (_proxy != null) {
+ try {
+ _proxy.close();
+ _proxy = null;
+ } catch (IOException ioe) {}
+ }
+ }
+
+ _currentAttempt++;
+ if (_currentAttempt > _numRetries)
+ break;
+ try { Thread.sleep(5*1000); } catch (InterruptedException ie) {}
+ }
+
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).transferFailed(_url, _bytesTransferred, _bytesRemaining, _currentAttempt);
+ return false;
+ }
+
+ private class DisconnectIfNoHeaders implements SimpleTimer.TimedEvent {
+ public void timeReached() {
+ if (_headersRead) {
+ // cool. noop
+ } else {
+ _aborted = true;
+ if (_proxyIn != null)
+ try { _proxyIn.close(); } catch (IOException ioe) {}
+ _proxyIn = null;
+ if (_proxyOut != null)
+ try { _proxyOut.close(); } catch (IOException ioe) {}
+ _proxyOut = null;
+ if (_proxy != null)
+ try { _proxy.close(); } catch (IOException ioe) {}
+ _proxy = null;
+ }
+ }
+ }
+
+ /** Read the response headers and body; throws an IOException if the headers time out or the transfer is cut short */
+ private void doFetch() throws IOException {
+ _headersRead = false;
+ _aborted = false;
+ if (_fetchHeaderTimeout > 0)
+ SimpleTimer.getInstance().addEvent(new DisconnectIfNoHeaders(), _fetchHeaderTimeout);
+ try {
+ readHeaders();
+ } finally {
+ _headersRead = true;
+ }
+ if (_aborted)
+ throw new IOException("Timed out reading the HTTP headers");
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Headers read completely, reading " + _bytesRemaining);
+
+ boolean strictSize = (_bytesRemaining >= 0);
+
+ int remaining = (int)_bytesRemaining;
+ byte buf[] = new byte[1024];
+ while (_keepFetching && ( (remaining > 0) || !strictSize )) {
+ int toRead = buf.length;
+ if (strictSize && toRead > remaining)
+ toRead = remaining;
+ int read = _proxyIn.read(buf, 0, toRead);
+ if (read == -1)
+ break;
+ _out.write(buf, 0, read);
+ _bytesTransferred += read;
+ remaining -= read;
+ if (remaining==0 && _encodingChunked) {
+ int char1 = _proxyIn.read();
+ if (char1 == '\r') {
+ int char2 = _proxyIn.read();
+ if (char2 == '\n') {
+ remaining = (int) readChunkLength();
+ } else {
+ _out.write(char1);
+ _out.write(char2);
+ _bytesTransferred += 2;
+ remaining -= 2;
+ read += 2;
+ }
+ } else {
+ _out.write(char1);
+ _bytesTransferred++;
+ remaining--;
+ read++;
+ }
+ }
+ if (read > 0)
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).bytesTransferred(
+ _alreadyTransferred,
+ read,
+ _bytesTransferred,
+ _encodingChunked?-1:_bytesRemaining,
+ _url);
+ }
+
+ if (_out != null)
+ _out.close();
+ _out = null;
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Done transferring " + _bytesTransferred);
+
+ if (_transferFailed) {
+ // 404, etc
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).transferFailed(_url, _bytesTransferred, _bytesRemaining, _currentAttempt);
+ } else if ( (_bytesRemaining == -1) || (remaining == 0) ) {
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).transferComplete(
+ _alreadyTransferred,
+ _bytesTransferred,
+ _encodingChunked?-1:_bytesRemaining,
+ _url,
+ _outputFile,
+ _notModified);
+ } else {
+ throw new IOException("Disconnection on attempt " + _currentAttempt + " after " + _bytesTransferred);
+ }
+ }
+
+ private void readHeaders() throws IOException {
+ String key = null;
+ StringBuffer buf = new StringBuffer(32);
+
+ boolean read = DataHelper.readLine(_proxyIn, buf);
+ if (!read) throw new IOException("Unable to read the first line");
+ int responseCode = handleStatus(buf.toString());
+
+ boolean rcOk = false;
+ switch (responseCode) {
+ case 200: // full
+ _out = new FileOutputStream(_outputFile, false);
+ _alreadyTransferred = 0;
+ rcOk = true;
+ break;
+ case 206: // partial
+ _out = new FileOutputStream(_outputFile, true);
+ rcOk = true;
+ break;
+ case 304: // not modified
+ _bytesRemaining = 0;
+ _keepFetching = false;
+ _notModified = true;
+ return;
+ case 404: // not found
+ _keepFetching = false;
+ _transferFailed = true;
+ return;
+ case 416: // completed (or range out of reach)
+ _bytesRemaining = 0;
+ _keepFetching = false;
+ return;
+ default:
+ rcOk = false;
+ _transferFailed = true;
+ }
+ buf.setLength(0);
+ byte lookahead[] = new byte[3];
+ while (true) {
+ int cur = _proxyIn.read();
+ switch (cur) {
+ case -1:
+ throw new IOException("Headers ended too soon");
+ case ':':
+ if (key == null) {
+ key = buf.toString();
+ buf.setLength(0);
+ increment(lookahead, cur);
+ break;
+ } else {
+ buf.append((char)cur);
+ increment(lookahead, cur);
+ break;
+ }
+ case '\n':
+ case '\r':
+ if (key != null)
+ handle(key, buf.toString());
+
+ buf.setLength(0);
+ key = null;
+ increment(lookahead, cur);
+ if (isEndOfHeaders(lookahead)) {
+ if (!rcOk)
+ throw new IOException("Invalid HTTP response code: " + responseCode);
+ if (_encodingChunked) {
+ _bytesRemaining = readChunkLength();
+ }
+ return;
+ }
+ break;
+ default:
+ buf.append((char)cur);
+ increment(lookahead, cur);
+ }
+
+ if (buf.length() > 1024)
+ throw new IOException("Header line too long: " + buf.toString());
+ }
+ }
+
+ private long readChunkLength() throws IOException {
+ StringBuffer buf = new StringBuffer(8);
+ int nl = 0;
+ while (true) {
+ int cur = _proxyIn.read();
+ switch (cur) {
+ case -1:
+ throw new IOException("Chunk ended too soon");
+ case '\n':
+ case '\r':
+ nl++;
+ default:
+ buf.append((char)cur);
+ }
+
+ if (nl >= 2)
+ break;
+ }
+
+ String len = buf.toString().trim();
+ try {
+ long bytes = Long.parseLong(len, 16);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Chunked length: " + bytes);
+ return bytes;
+ } catch (NumberFormatException nfe) {
+ throw new IOException("Invalid chunk length [" + len + "]");
+ }
+ }
+
+ /**
+ * parse the first status line and grab the response code.
+ * e.g. "HTTP/1.1 206 OK" vs "HTTP/1.1 200 OK" vs
+ * "HTTP/1.1 404 NOT FOUND", etc.
+ *
+ * @return HTTP response code (200, 206, other)
+ */
+ private int handleStatus(String line) {
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Status line: [" + line + "]");
+ StringTokenizer tok = new StringTokenizer(line, " ");
+ if (!tok.hasMoreTokens()) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("ERR: status "+ line);
+ return -1;
+ }
+ String protocol = tok.nextToken(); // ignored
+ if (!tok.hasMoreTokens()) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("ERR: status "+ line);
+ return -1;
+ }
+ String rc = tok.nextToken();
+ try {
+ return Integer.parseInt(rc);
+ } catch (NumberFormatException nfe) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("ERR: status is invalid: " + line, nfe);
+ return -1;
+ }
+ }
+
+ private void handle(String key, String val) {
+ for (int i = 0; i < _listeners.size(); i++)
+ ((StatusListener)_listeners.get(i)).headerReceived(_url, _currentAttempt, key.trim(), val.trim());
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Header line: [" + key + "] = [" + val + "]");
+ if (key.equalsIgnoreCase("Content-length")) {
+ try {
+ _bytesRemaining = Long.parseLong(val.trim());
+ } catch (NumberFormatException nfe) {
+ nfe.printStackTrace();
+ }
+ } else if (key.equalsIgnoreCase("ETag")) {
+ _etag = val.trim();
+ } else if (key.equalsIgnoreCase("Transfer-encoding")) {
+ if (val.indexOf("chunked") != -1)
+ _encodingChunked = true;
+ } else if (key.equalsIgnoreCase("Content-Type")) {
+ _contentType=val;
+ } else {
+ // ignore the rest
+ }
+ }
+
+ private void increment(byte[] lookahead, int cur) {
+ lookahead[0] = lookahead[1];
+ lookahead[1] = lookahead[2];
+ lookahead[2] = (byte)cur;
+ }
+ private boolean isEndOfHeaders(byte lookahead[]) {
+ byte first = lookahead[0];
+ byte second = lookahead[1];
+ byte third = lookahead[2];
+ return (isNL(second) && isNL(third)) || // \n\n
+ (isNL(first) && isNL(third)); // \n\r\n
+ }
+
+ /** we ignore any potential \r, since we trim it on write anyway */
+ private static final byte NL = '\n';
+ private boolean isNL(byte b) { return (b == NL); }
+
+ private void sendRequest() throws IOException {
+ File outFile = new File(_outputFile);
+ if (outFile.exists())
+ _alreadyTransferred = outFile.length();
+
+ String req = getRequest();
+
+ if (_shouldProxy) {
+ _proxy = new Socket(_proxyHost, _proxyPort);
+ } else {
+ try {
+ URL url = new URL(_url);
+ String host = url.getHost();
+ int port = url.getPort();
+ if (port == -1)
+ port = 80;
+ _proxy = new Socket(host, port);
+ } catch (MalformedURLException mue) {
+ throw new IOException("Request URL is invalid");
+ }
+ }
+ _proxyIn = _proxy.getInputStream();
+ _proxyOut = _proxy.getOutputStream();
+
+ _proxyOut.write(DataHelper.getUTF8(req.toString()));
+ _proxyOut.flush();
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Request flushed");
+ }
+
+ private String getRequest() throws IOException {
+ StringBuffer buf = new StringBuffer(512);
+ boolean post = false;
+ if ( (_postData != null) && (_postData.length() > 0) )
+ post = true;
+ if (post) {
+ buf.append("POST ").append(_url).append(" HTTP/1.1\r\n");
+ } else {
+ buf.append("GET ").append(_url).append(" HTTP/1.1\r\n");
+ }
+ URL url = new URL(_url);
+ buf.append("Host: ").append(url.getHost()).append("\r\n");
+ if (_alreadyTransferred > 0) {
+ buf.append("Range: bytes=");
+ buf.append(_alreadyTransferred);
+ buf.append("-\r\n");
+ }
+ buf.append("Accept-Encoding: \r\n");
+ buf.append("X-Accept-Encoding: x-i2p-gzip;q=1.0, identity;q=0.5, deflate;q=0, gzip;q=0, *;q=0\r\n");
+ if (!_allowCaching) {
+ buf.append("Cache-control: no-cache\r\n");
+ buf.append("Pragma: no-cache\r\n");
+ }
+ if (_etag != null) {
+ buf.append("If-None-Match: ");
+ buf.append(_etag);
+ buf.append("\r\n");
+ }
+ if (post)
+ buf.append("Content-length: ").append(_postData.length()).append("\r\n");
+ buf.append("Connection: close\r\n\r\n");
+ if (post)
+ buf.append(_postData);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Request: [" + buf.toString() + "]");
+ return buf.toString();
+ }
+
+ public String getETag() {
+ return _etag;
+ }
+
+ public boolean getNotModified() {
+ return _notModified;
+ }
+
+ public String getContentType() {
+ return _contentType;
+ }
+
+}
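
The class above can also be driven directly from other code rather than through main(). A minimal sketch, assuming an eepproxy on localhost:4444; the URL, output file name, and listener behavior are placeholders, but the constructor and fetch() calls are the same ones main() uses:

    // Proxied fetch with 5 retries, caching allowed, and no prior ETag.
    I2PAppContext ctx = I2PAppContext.getGlobalContext();
    EepGet get = new EepGet(ctx, true, "localhost", 4444, 5,
                            "archive.txt", "http://example.i2p/archive.txt", true, null);
    get.addStatusListener(new EepGet.StatusListener() {
        public void bytesTransferred(long already, int write, long xfer, long remaining, String url) {}
        public void transferComplete(long already, long xfer, long remaining,
                                     String url, String outputFile, boolean notModified) {
            System.out.println("Saved " + url + " to " + outputFile);
        }
        public void attemptFailed(String url, long xfer, long remaining,
                                  int attempt, int retries, Exception cause) {}
        public void transferFailed(String url, long xfer, long remaining, int attempt) {}
        public void headerReceived(String url, int attempt, String key, String val) {}
        public void attempting(String url) {}
    });
    if (!get.fetch(60*1000))   // abort an attempt if no headers arrive within a minute
        System.err.println("All retries failed");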
diff --git a/src/net/i2p/util/EepGetScheduler.java b/src/net/i2p/util/EepGetScheduler.java
new file mode 100644
index 0000000..9c09d0a
--- /dev/null
+++ b/src/net/i2p/util/EepGetScheduler.java
@@ -0,0 +1,84 @@
+package net.i2p.util;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.List;
+import net.i2p.I2PAppContext;
+
+/**
+ * Fetch a list of URLs one at a time, saving each to a local file and relaying
+ * all EepGet status events to a single listener.
+ */
+public class EepGetScheduler implements EepGet.StatusListener {
+ private I2PAppContext _context;
+ private List _urls;
+ private List _localFiles;
+ private String _proxyHost;
+ private int _proxyPort;
+ private int _curURL;
+ private EepGet.StatusListener _listener;
+
+ public EepGetScheduler(I2PAppContext ctx, List urls, List localFiles, String proxyHost, int proxyPort, EepGet.StatusListener lsnr) {
+ _context = ctx;
+ _urls = urls;
+ _localFiles = localFiles;
+ _proxyHost = proxyHost;
+ _proxyPort = proxyPort;
+ _curURL = -1;
+ _listener = lsnr;
+ }
+
+ public void fetch() {
+ I2PThread t = new I2PThread(new Runnable() { public void run() { fetchNext(); } }, "EepGetScheduler");
+ t.setDaemon(true);
+ t.start();
+ }
+
+ public void fetch(boolean shouldBlock) {
+ //Checking for a valid index is done in fetchNext, so we don't have to worry about it.
+ if (shouldBlock) {
+ while (_curURL < _urls.size())
+ fetchNext();
+ } else {
+ fetch();
+ }
+ }
+
+ private void fetchNext() {
+ _curURL++;
+ if (_curURL >= _urls.size()) return;
+ String url = (String)_urls.get(_curURL);
+ String out = EepGet.suggestName(url);
+ if ( (_localFiles != null) && (_localFiles.size() > _curURL) ) {
+ File f = (File)_localFiles.get(_curURL);
+ out = f.getAbsolutePath();
+ } else {
+ if (_localFiles == null)
+ _localFiles = new ArrayList(_urls.size());
+ _localFiles.add(new File(out));
+ }
+ EepGet get = new EepGet(_context, ((_proxyHost != null) && (_proxyPort > 0)), _proxyHost, _proxyPort, 0, out, url);
+ get.addStatusListener(this);
+ get.fetch();
+ }
+
+ public void attemptFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt, int numRetries, Exception cause) {
+ _listener.attemptFailed(url, bytesTransferred, bytesRemaining, currentAttempt, numRetries, cause);
+ }
+
+ public void bytesTransferred(long alreadyTransferred, int currentWrite, long bytesTransferred, long bytesRemaining, String url) {
+ _listener.bytesTransferred(alreadyTransferred, currentWrite, bytesTransferred, bytesRemaining, url);
+ }
+
+ public void transferComplete(long alreadyTransferred, long bytesTransferred, long bytesRemaining, String url, String outputFile, boolean notModified) {
+ _listener.transferComplete(alreadyTransferred, bytesTransferred, bytesRemaining, url, outputFile, notModified);
+ fetchNext();
+ }
+
+ public void transferFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt) {
+ _listener.transferFailed(url, bytesTransferred, bytesRemaining, currentAttempt);
+ fetchNext();
+ }
+ public void attempting(String url) { _listener.attempting(url); }
+
+ public void headerReceived(String url, int attemptNum, String key, String val) {}
+}
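
A sketch of queueing several downloads through the scheduler; the URLs and eepproxy settings are illustrative, and "quiet" stands for any EepGet.StatusListener (for example the anonymous one sketched after EepGet above). Passing null for localFiles makes fetchNext() derive output names via EepGet.suggestName(), and fetch(true) blocks until every URL has been attempted:

    List urls = new ArrayList();
    urls.add("http://archive.i2p/archive/index.html");
    urls.add("http://archive.i2p/archive/posts/1.snd");
    EepGetScheduler sched = new EepGetScheduler(I2PAppContext.getGlobalContext(),
                                                urls, null, "localhost", 4444, quiet);
    sched.fetch(true);   // fetches the URLs one at a time, in order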
diff --git a/src/net/i2p/util/EepPost.java b/src/net/i2p/util/EepPost.java
new file mode 100644
index 0000000..3b45280
--- /dev/null
+++ b/src/net/i2p/util/EepPost.java
@@ -0,0 +1,212 @@
+package net.i2p.util;
+
+import java.io.*;
+import java.net.*;
+import java.util.*;
+import net.i2p.I2PAppContext;
+import net.i2p.util.Log;
+
+/**
+ * Simple helper for uploading files and such via HTTP POST (RFC 1867)
+ *
+ */
+public class EepPost {
+ private I2PAppContext _context;
+ private Log _log;
+ private static final String CRLF = "\r\n";
+
+ public EepPost() {
+ this(I2PAppContext.getGlobalContext());
+ }
+ public EepPost(I2PAppContext ctx) {
+ _context = ctx;
+ _log = ctx.logManager().getLog(EepPost.class);
+ }
+
+ public static void main(String args[]) {
+ EepPost e = new EepPost();
+ Map fields = new HashMap();
+ fields.put("key", "value");
+ fields.put("key1", "value1");
+ fields.put("key2", "value2");
+ fields.put("blogpost0", new File("/home/i2p/1.snd"));
+ fields.put("blogpost1", new File("/home/i2p/2.snd"));
+ fields.put("blogpost2", new File("/home/i2p/2.snd"));
+ fields.put("blogpost3", new File("/home/i2p/2.snd"));
+ fields.put("blogpost4", new File("/home/i2p/2.snd"));
+ fields.put("blogpost5", new File("/home/i2p/2.snd"));
+ e.postFiles("http://localhost:7653/import.jsp", null, -1, fields, null);
+ //e.postFiles("http://localhost/cgi-bin/read.pl", null, -1, fields, null);
+ //e.postFiles("http://localhost:2001/import.jsp", null, -1, fields, null);
+ }
+ /**
+ * Submit an HTTP POST to the given URL (using the proxy if specified),
+ * uploading the given fields. If the field's value is a File object, then
+ * that file is uploaded, and if the field's value is a String object, the
+ * value is posted for that particular field. Multiple values for one
+ * field name are not currently supported.
+ *
+ */
+ public void postFiles(String url, String proxyHost, int proxyPort, Map fields, Runnable onCompletion) {
+ I2PThread postThread = new I2PThread(new Runner(url, proxyHost, proxyPort, fields, onCompletion));
+ postThread.start();
+ }
+
+ private class Runner implements Runnable {
+ private String _url;
+ private String _proxyHost;
+ private int _proxyPort;
+ private Map _fields;
+ private Runnable _onCompletion;
+ public Runner(String url, String proxy, int port, Map fields, Runnable onCompletion) {
+ _url = url;
+ _proxyHost = proxy;
+ _proxyPort = port;
+ _fields = fields;
+ _onCompletion = onCompletion;
+ }
+ public void run() {
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Running the post task");
+ Socket s = null;
+ try {
+ URL u = new URL(_url);
+ String h = u.getHost();
+ int p = u.getPort();
+ if (p <= 0)
+ p = 80;
+ String path = u.getPath();
+
+ boolean isProxy = true;
+ if ( (_proxyHost == null) || (_proxyPort <= 0) ) {
+ isProxy = false;
+ _proxyHost = h;
+ _proxyPort = p;
+ }
+
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Connecting to the server/proxy...");
+ s = new Socket(_proxyHost, _proxyPort);
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Connected");
+ OutputStream out = s.getOutputStream();
+ String sep = getSeparator();
+ long length = calcContentLength(sep, _fields);
+ if (_log.shouldLog(Log.DEBUG)) _log.debug("Separator: " + sep + " content length: " + length);
+ String header = getHeader(isProxy, path, h, p, sep, length);
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Header: \n" + header);
+ out.write(header.getBytes());
+ out.flush();
+ if (false) {
+ out.write(("--" + sep + CRLF + "content-disposition: form-data; name=\"field1\"" + CRLF + CRLF + "Stuff goes here" + CRLF + "--" + sep + "--" + CRLF).getBytes());
+ } else {
+ sendFields(out, sep, _fields);
+ }
+ out.flush();
+ if (_log.shouldLog(Log.DEBUG)) {
+ BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
+ String line = null;
+ while ( (line = in.readLine()) != null) {
+ _log.debug("recv: [" + line + "]");
+ }
+ }
+ out.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ } finally {
+ if (s != null) try { s.close(); } catch (IOException ioe) {}
+ if (_onCompletion != null)
+ _onCompletion.run();
+ }
+ }
+ }
+
+ private long calcContentLength(String sep, Map fields) {
+ long len = 0;
+ for (Iterator iter = fields.keySet().iterator(); iter.hasNext(); ) {
+ String key = (String)iter.next();
+ Object val = fields.get(key);
+ if (val instanceof File) {
+ File f = (File)val;
+ len += ("--" + sep + CRLF + "Content-Disposition: form-data; name=\"" + key + "\"; filename=\"" + f.getName() + "\"" + CRLF).length();
+ //len += ("Content-length: " + f.length() + "\n").length();
+ len += ("Content-Type: application/octet-stream" + CRLF + CRLF).length();
+ len += f.length();
+ len += CRLF.length(); // nl
+ } else {
+ len += ("--" + sep + CRLF + "Content-Disposition: form-data; name=\"" + key + "\"" + CRLF + CRLF).length();
+ len += val.toString().length();
+ len += CRLF.length(); // nl
+ }
+ }
+ len += 2 + sep.length() + 2 + CRLF.length(); //2 + sep.length() + 2;
+ //len += 2;
+ return len;
+ }
+ private void sendFields(OutputStream out, String separator, Map fields) throws IOException {
+ for (Iterator iter = fields.keySet().iterator(); iter.hasNext(); ) {
+ String field = (String)iter.next();
+ Object val = fields.get(field);
+ if (val instanceof File)
+ sendFile(out, separator, field, (File)val);
+ else
+ sendField(out, separator, field, val.toString());
+ }
+ out.write(("--" + separator + "--" + CRLF).getBytes());
+ }
+
+ private void sendFile(OutputStream out, String separator, String field, File file) throws IOException {
+ long len = file.length();
+ out.write(("--" + separator + CRLF).getBytes());
+ out.write(("Content-Disposition: form-data; name=\"" + field + "\"; filename=\"" + file.getName() + "\"" + CRLF).getBytes());
+ //out.write(("Content-length: " + len + "\n").getBytes());
+ out.write(("Content-Type: application/octet-stream" + CRLF + CRLF).getBytes());
+ FileInputStream in = new FileInputStream(file);
+ byte buf[] = new byte[1024];
+ int read = -1;
+ while ( (read = in.read(buf)) != -1)
+ out.write(buf, 0, read);
+ out.write(CRLF.getBytes());
+ in.close();
+ }
+
+ private void sendField(OutputStream out, String separator, String field, String val) throws IOException {
+ out.write(("--" + separator + CRLF).getBytes());
+ out.write(("Content-Disposition: form-data; name=\"" + field + "\"" + CRLF + CRLF).getBytes());
+ out.write(val.getBytes());
+ out.write(CRLF.getBytes());
+ }
+
+ private String getHeader(boolean isProxy, String path, String host, int port, String separator, long length) {
+ StringBuffer buf = new StringBuffer(512);
+ buf.append("POST ");
+ if (isProxy) {
+ buf.append("http://").append(host);
+ if (port != 80)
+ buf.append(":").append(port);
+ }
+ buf.append(path);
+ buf.append(" HTTP/1.1" + CRLF);
+ buf.append("Host: ").append(host);
+ if (port != 80)
+ buf.append(":").append(port);
+ buf.append(CRLF);
+ buf.append("Connection: close" + CRLF);
+ buf.append("Content-length: ").append(length).append(CRLF);
+ buf.append("Content-type: multipart/form-data, boundary=").append(separator);
+ buf.append(CRLF);
+ buf.append(CRLF);
+ return buf.toString();
+ }
+
+ private String getSeparator() {
+ if (false)
+ return "ABCDEFG";
+ if (false)
+ return "------------------------" + new java.util.Random().nextLong();
+ byte separator[] = new byte[32]; // 2^-128 chance of this being a problem
+ I2PAppContext.getGlobalContext().random().nextBytes(separator);
+ StringBuffer sep = new StringBuffer(48);
+ for (int i = 0; i < separator.length; i++)
+ sep.append((char)((int)'a' + (int)(separator[i]&0x0F))).append((char)((int)'a' + (int)((separator[i] >>> 4) & 0x0F)));
+ return sep.toString();
+ }
+}
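
For reference, the request that getHeader() and sendFields() assemble looks roughly like the following, assuming one string field and one file field and a non-proxied connection (the boundary is the random lowercase string from getSeparator(), and the length comes from calcContentLength()):

    POST /import.jsp HTTP/1.1
    Host: localhost:7653
    Connection: close
    Content-length: <calculated body length>
    Content-type: multipart/form-data, boundary=<separator>

    --<separator>
    Content-Disposition: form-data; name="key"

    value
    --<separator>
    Content-Disposition: form-data; name="blogpost0"; filename="1.snd"
    Content-Type: application/octet-stream

    <raw file bytes>
    --<separator>--

When a proxy host and port are given, the request line carries the absolute URL (http://host:port/path) instead of just the path.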
diff --git a/src/net/i2p/util/EventDispatcher.java b/src/net/i2p/util/EventDispatcher.java
new file mode 100644
index 0000000..ba7cc43
--- /dev/null
+++ b/src/net/i2p/util/EventDispatcher.java
@@ -0,0 +1,104 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others. Written
+ * by human in 2004 and released into the public domain with no
+ * warranty of any kind, either expressed or implied. It probably
+ * won't make your computer catch on fire, or eat your children, but
+ * it might. Use at your own risk.
+ *
+ */
+
+import java.util.Set;
+
+/**
+ * Event dispatching interface. It allows objects to receive and
+ * notify data events (basically String->Object associations) and
+ * create notification chains. To ease the usage of this interface,
+ * you could define an EventDispatcherImpl attribute called
+ * _event (as suggested in the EventDispatcherImpl documentation)
+ * and cut'n'paste the following default implementation:
+ *
+ *
+ * public EventDispatcher getEventDispatcher() { return _event; }
+ * public void attachEventDispatcher(IEventDispatcher e) { _event.attachEventDispatcher(e.getEventDispatcher()); }
+ * public void detachEventDispatcher(IEventDispatcher e) { _event.detachEventDispatcher(e.getEventDispatcher()); }
+ * public void notifyEvent(String e, Object a) { _event.notifyEvent(e,a); }
+ * public Object getEventValue(String n) { return _event.getEventValue(n); }
+ * public Set getEvents() { return _event.getEvents(); }
+ * public void ignoreEvents() { _event.ignoreEvents(); }
+ * public void unIgnoreEvents() { _event.unIgnoreEvents(); }
+ * public Object waitEventValue(String n) { return _event.waitEventValue(n); }
+ *
+ *
+ * @author human
+ */
+public interface EventDispatcher {
+
+ /**
+ * Get an object to be used to deliver events (usually
+ * this, but YMMV).
+ */
+ public EventDispatcher getEventDispatcher();
+
+ /**
+ * Attach an EventDispatcher object to the events dispatching chain. Note
+ * that notification is not bidirectional (i.e. events notified to
+ * ev won't reach the object calling this method).
+ * Good luck, and beware of notification loops! :-)
+ *
+ * @param iev Event object to be attached
+ */
+ public void attachEventDispatcher(EventDispatcher iev);
+
+ /**
+ * Detach the specified EventDispatcher object from the events dispatching chain.
+ *
+ * @param iev Event object to be detached
+ */
+ public void detachEventDispatcher(EventDispatcher iev);
+
+ /**
+ * Deliver an event
+ *
+ * @param event name of the event
+ * @param args data being stored for that event
+ */
+ public void notifyEvent(String event, Object args);
+
+ /**
+ * Retrieve the value currently associated with the specified
+ * event value
+ *
+ * @param name name of the event to query for
+ * @return value (or null if none are available)
+ */
+ public Object getEventValue(String name);
+
+ /**
+ * Retrieve the names of all the events that have been received
+ *
+ * @return A set of event names
+ */
+ public Set getEvents();
+
+ /**
+ * Ignore further event notifications
+ *
+ */
+ public void ignoreEvents();
+
+ /**
+ * Stop ignoring event notifications (the opposite of the method above :-)
+ *
+ */
+ public void unIgnoreEvents();
+
+ /**
+ * Wait until the given event has received a value
+ *
+ * @param name name of the event to wait for
+ * @return value specified for that event
+ */
+ public Object waitEventValue(String name);
+}
\ No newline at end of file
diff --git a/src/net/i2p/util/EventDispatcherImpl.java b/src/net/i2p/util/EventDispatcherImpl.java
new file mode 100644
index 0000000..b82682b
--- /dev/null
+++ b/src/net/i2p/util/EventDispatcherImpl.java
@@ -0,0 +1,142 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others. Written
+ * by human & jrandom in 2004 and released into the public domain with
+ * no warranty of any kind, either expressed or implied. It probably
+ * won't make your computer catch on fire, or eat your children, but
+ * it might. Use at your own risk.
+ *
+ */
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.ListIterator;
+import java.util.Set;
+
+/**
+ * An implementation of the EventDispatcher interface. Since Java
+ * doesn't support multiple inheritance, you could follow the Log.java
+ * style: this class should be instantiated and kept as a variable by
+ * each object it is used by, ala:
+ * private final EventDispatcher _event = new EventDispatcherImpl();
+ *
+ * If there is anything in here that doesn't make sense, turn off
+ * your computer and go fly a kite - (c) 2004 by jrandom
+
+ * @author human
+ * @author jrandom
+ */
+public class EventDispatcherImpl implements EventDispatcher {
+
+ private final static Log _log = new Log(EventDispatcherImpl.class);
+
+ private boolean _ignore = false;
+ private HashMap _events = new HashMap(4);
+ private ArrayList _attached = new ArrayList();
+
+ public EventDispatcher getEventDispatcher() {
+ return this;
+ }
+
+ public void attachEventDispatcher(EventDispatcher ev) {
+ if (ev == null) return;
+ synchronized (_attached) {
+ _log.debug(this.hashCode() + ": attaching EventDispatcher " + ev.hashCode());
+ _attached.add(ev);
+ }
+ }
+
+ public void detachEventDispatcher(EventDispatcher ev) {
+ if (ev == null) return;
+ synchronized (_attached) {
+ ListIterator it = _attached.listIterator();
+ while (it.hasNext()) {
+ if (((EventDispatcher) it.next()) == ev) {
+ it.remove();
+ break;
+ }
+ }
+ }
+ }
+
+ public void notifyEvent(String eventName, Object args) {
+ if (_ignore) return;
+ if (args == null) {
+ args = "[null value]";
+ }
+ _log.debug(this.hashCode() + ": got notification [" + eventName + "] = [" + args + "]");
+ synchronized (_events) {
+ _events.put(eventName, args);
+ _events.notifyAll();
+ synchronized (_attached) {
+ Iterator it = _attached.iterator();
+ EventDispatcher e;
+ while (it.hasNext()) {
+ e = (EventDispatcher) it.next();
+ _log.debug(this.hashCode() + ": notifying attached EventDispatcher " + e.hashCode() + ": ["
+ + eventName + "] = [" + args + "]");
+ e.notifyEvent(eventName, args);
+ }
+ }
+ }
+ }
+
+ public Object getEventValue(String name) {
+ if (_ignore) return null;
+ Object val;
+
+ synchronized (_events) {
+ val = _events.get(name);
+ }
+
+ return val;
+ }
+
+ public Set getEvents() {
+ if (_ignore) return Collections.EMPTY_SET;
+ Set set;
+
+ synchronized (_events) {
+ set = new HashSet(_events.keySet());
+ }
+
+ return set;
+ }
+
+ public void ignoreEvents() {
+ _ignore = true;
+ synchronized (_events) {
+ _events.clear();
+ }
+ _events = null;
+ }
+
+ public void unIgnoreEvents() {
+ _ignore = false;
+ }
+
+ public Object waitEventValue(String name) {
+ if (_ignore) return null;
+ Object val;
+
+ _log.debug(this.hashCode() + ": waiting for [" + name + "]");
+ do {
+ synchronized (_events) {
+ if (_events.containsKey(name)) {
+ val = _events.get(name);
+ break;
+ }
+ try {
+ _events.wait(1 * 1000);
+ } catch (InterruptedException e) { // nop
+ }
+ }
+ } while (true);
+
+ return val;
+ }
+}
\ No newline at end of file
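
A small sketch of the notify/wait pattern these two classes provide; the event name and thread body are arbitrary:

    // One thread publishes a value under an event name, another blocks until it appears.
    final EventDispatcherImpl dispatcher = new EventDispatcherImpl();

    new I2PThread(new Runnable() {
        public void run() {
            // ... do some work, then publish the result
            dispatcher.notifyEvent("upload.status", "complete");
        }
    }, "publisher").start();

    // waitEventValue() polls the event table in one-second slices until the name is notified
    Object status = dispatcher.waitEventValue("upload.status");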
diff --git a/src/net/i2p/util/FortunaRandomSource.java b/src/net/i2p/util/FortunaRandomSource.java
new file mode 100644
index 0000000..2d1a691
--- /dev/null
+++ b/src/net/i2p/util/FortunaRandomSource.java
@@ -0,0 +1,224 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.security.SecureRandom;
+
+import net.i2p.I2PAppContext;
+import net.i2p.crypto.EntropyHarvester;
+
+import gnu.crypto.prng.AsyncFortunaStandalone;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+
+/**
+ * Wrapper around GNU-Crypto's Fortuna PRNG. This seeds from /dev/urandom and
+ * ./prngseed.rnd on startup (if they exist), writing a new seed to ./prngseed.rnd
+ * on an explicit call to saveSeed().
+ *
+ */
+public class FortunaRandomSource extends RandomSource implements EntropyHarvester {
+ private AsyncFortunaStandalone _fortuna;
+ private double _nextGaussian;
+ private boolean _haveNextGaussian;
+
+ public FortunaRandomSource(I2PAppContext context) {
+ super(context);
+ _fortuna = new AsyncFortunaStandalone();
+ byte seed[] = new byte[1024];
+ if (initSeed(seed)) {
+ _fortuna.seed(seed);
+ } else {
+ SecureRandom sr = new SecureRandom();
+ sr.nextBytes(seed);
+ _fortuna.seed(seed);
+ }
+ _fortuna.startup();
+ // kickstart it
+ _fortuna.nextBytes(seed);
+ _haveNextGaussian = false;
+ }
+
+ public synchronized void setSeed(byte buf[]) {
+ _fortuna.addRandomBytes(buf);
+ }
+
+ /**
+ * According to the java docs (http://java.sun.com/j2se/1.4.1/docs/api/java/util/Random.html#nextInt(int))
+ * nextInt(n) should return a number between 0 and n (including 0 and excluding n). However, their pseudocode,
+ * as well as sun's, kaffe's, and classpath's implementation INCLUDES NEGATIVE VALUES.
+ * WTF. Ok, so we're going to have it return between 0 and n (including 0, excluding n), since
+ * thats what it has been used for.
+ *
+ */
+ public int nextInt(int n) {
+ if (n == 0) return 0;
+ int rv = signedNextInt(n);
+ if (rv < 0)
+ rv = 0 - rv;
+ rv %= n;
+ return rv;
+ }
+
+ public int nextInt() { return signedNextInt(Integer.MAX_VALUE); }
+
+ /**
+ * Implementation from Sun's java.util.Random javadocs
+ */
+ private int signedNextInt(int n) {
+ if (n<=0)
+ throw new IllegalArgumentException("n must be positive");
+
+ ////
+ // this shortcut from sun's docs neither works nor is necessary.
+ //
+ //if ((n & -n) == n) {
+ // // i.e., n is a power of 2
+ // return (int)((n * (long)nextBits(31)) >> 31);
+ //}
+
+ int numBits = 0;
+ int remaining = n;
+ int rv = 0;
+ while (remaining > 0) {
+ remaining >>= 1;
+ rv += nextBits(8) << numBits*8;
+ numBits++;
+ }
+ if (rv < 0)
+ rv += n;
+ return rv % n;
+
+ //int bits, val;
+ //do {
+ // bits = nextBits(31);
+ // val = bits % n;
+ //} while(bits - val + (n-1) < 0);
+ //
+ //return val;
+ }
+
+ /**
+ * Like the modified nextInt, nextLong(n) returns a random number from 0 through n,
+ * including 0, excluding n.
+ */
+ public long nextLong(long n) {
+ if (n == 0) return 0;
+ long rv = signedNextLong(n);
+ if (rv < 0)
+ rv = 0 - rv;
+ rv %= n;
+ return rv;
+ }
+
+ public long nextLong() { return signedNextLong(Long.MAX_VALUE); }
+
+ /**
+ * Implementation from Sun's java.util.Random javadocs
+ */
+ private long signedNextLong(long n) {
+ return ((long)nextBits(32) << 32) + nextBits(32);
+ }
+
+ public synchronized boolean nextBoolean() {
+ // wasteful, might be worth caching the boolean byte later
+ byte val = _fortuna.nextByte();
+ return ((val & 0x01) == 1);
+ }
+
+ public synchronized void nextBytes(byte buf[]) {
+ _fortuna.nextBytes(buf);
+ }
+
+ /**
+ * Implementation from sun's java.util.Random javadocs
+ */
+ public double nextDouble() {
+ return (((long)nextBits(26) << 27) + nextBits(27)) / (double)(1L << 53);
+ }
+ /**
+ * Implementation from sun's java.util.Random javadocs
+ */
+ public float nextFloat() {
+ return nextBits(24) / ((float)(1 << 24));
+ }
+ /**
+ * Implementation from sun's java.util.Random javadocs
+ */
+ public synchronized double nextGaussian() {
+ if (_haveNextGaussian) {
+ _haveNextGaussian = false;
+ return _nextGaussian;
+ } else {
+ double v1, v2, s;
+ do {
+ v1 = 2 * nextDouble() - 1; // between -1.0 and 1.0
+ v2 = 2 * nextDouble() - 1; // between -1.0 and 1.0
+ s = v1 * v1 + v2 * v2;
+ } while (s >= 1 || s == 0);
+ double multiplier = Math.sqrt(-2 * Math.log(s)/s);
+ _nextGaussian = v2 * multiplier;
+ _haveNextGaussian = true;
+ return v1 * multiplier;
+ }
+ }
+
+ /**
+ * Pull the next numBits of random data off the fortuna instance
+ * (returning a value from 0 through 2^numBits-1)
+ */
+ protected synchronized int nextBits(int numBits) {
+ long rv = 0;
+ int bytes = (numBits + 7) / 8;
+ for (int i = 0; i < bytes; i++)
+ rv += ((_fortuna.nextByte() & 0xFF) << i*8);
+ //rv >>>= (64-numBits);
+ if (rv < 0)
+ rv = 0 - rv;
+ int off = 8*bytes - numBits;
+ rv >>>= off;
+ return (int)rv;
+ }
+
+ public EntropyHarvester harvester() { return this; }
+
+ /** reseed the fortuna */
+ public synchronized void feedEntropy(String source, long data, int bitoffset, int bits) {
+ _fortuna.addRandomByte((byte)(data & 0xFF));
+ }
+
+ /** reseed the fortuna */
+ public synchronized void feedEntropy(String source, byte[] data, int offset, int len) {
+ _fortuna.addRandomBytes(data, offset, len);
+ }
+
+ public static void main(String args[]) {
+ try {
+ RandomSource rand = I2PAppContext.getGlobalContext().random();
+ if (true) {
+ for (int i = 0; i < 1000; i++)
+ if (rand.nextFloat() < 0)
+ throw new RuntimeException("negative!");
+ System.out.println("All positive");
+ return;
+ }
+ java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
+ java.util.zip.GZIPOutputStream gos = new java.util.zip.GZIPOutputStream(baos);
+ for (int i = 0; i < 1024*1024; i++) {
+ int c = rand.nextInt(256);
+ gos.write((byte)c);
+ }
+ gos.finish();
+ byte compressed[] = baos.toByteArray();
+ System.out.println("Compressed size of 1MB: " + compressed.length);
+ } catch (Exception e) { e.printStackTrace(); }
+ }
+}
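
A quick illustration of the [0, n) contract documented for nextInt() and nextLong(); the bound is arbitrary and the RandomSource is obtained the same way main() does above:

    RandomSource rand = I2PAppContext.getGlobalContext().random();
    for (int i = 0; i < 10000; i++) {
        int r = rand.nextInt(100);        // always 0 <= r < 100, never negative
        if (r < 0 || r >= 100)
            throw new RuntimeException("out of range: " + r);
    }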
diff --git a/src/net/i2p/util/HexDump.java b/src/net/i2p/util/HexDump.java
new file mode 100644
index 0000000..0d56d2f
--- /dev/null
+++ b/src/net/i2p/util/HexDump.java
@@ -0,0 +1,135 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by human in 2004 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+
+/**
+ * Hexdump class (well, it's actually a namespace with some functions,
+ * but let's stick with java terminology :-). These methods generate
+ * an output that resembles `hexdump -C` (Windows users: do you
+ * remember `debug` in the DOS age?).
+ *
+ * @author human
+ */
+public class HexDump {
+
+ private static final int FORMAT_OFFSET_PADDING = 8;
+ private static final int FORMAT_BYTES_PER_ROW = 16;
+ private static final byte[] HEXCHARS = "0123456789abcdef".getBytes();
+
+ /**
+ * Dump a byte array in a String.
+ *
+ * @param data Data to be dumped
+ */
+ public static String dump(byte[] data) {
+ ByteArrayOutputStream out = new ByteArrayOutputStream();
+
+ try {
+ dump(data, 0, data.length, out);
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ return out.toString();
+ }
+
+ /**
+ * Dump a byte array in a String.
+ *
+ * @param data Data to be dumped
+ * @param off Offset from the beginning of data
+ * @param len Number of bytes of data to be dumped
+ */
+ public static String dump(byte[] data, int off, int len) {
+ ByteArrayOutputStream out = new ByteArrayOutputStream();
+
+ try {
+ dump(data, off, len, out);
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ return out.toString();
+ }
+
+ /**
+ * Dump a byte array through a stream.
+ *
+ * @param data Data to be dumped
+ * @param out Output stream
+ */
+ public static void dump(byte data[], OutputStream out) throws IOException {
+ dump(data, 0, data.length, out);
+ }
+
+ /**
+ * Dump a byte array through a stream.
+ *
+ * @param data Data to be dumped
+ * @param off Offset from the beginning of data
+ * @param len Number of bytes of data to be dumped
+ * @param out Output stream
+ */
+ public static void dump(byte[] data, int off, int len, OutputStream out) throws IOException {
+ String hexoff;
+ int dumpoff, hexofflen, i, nextbytes, end = len + off;
+ int val;
+
+ for (dumpoff = off; dumpoff < end; dumpoff += FORMAT_BYTES_PER_ROW) {
+ // Pad the offset with 0's (i miss my beloved sprintf()...)
+ hexoff = Integer.toString(dumpoff, 16);
+ hexofflen = hexoff.length();
+ for (i = 0; i < FORMAT_OFFSET_PADDING - hexofflen; ++i) {
+ hexoff = "0" + hexoff;
+ }
+ out.write((hexoff + " ").getBytes());
+
+ // Bytes to be printed in the current line
+ nextbytes = (FORMAT_BYTES_PER_ROW < (end - dumpoff) ? FORMAT_BYTES_PER_ROW : (end - dumpoff));
+
+ for (i = 0; i < FORMAT_BYTES_PER_ROW; ++i) {
+ // Put two spaces to separate 8-bytes blocks
+ if ((i % 8) == 0) {
+ out.write(" ".getBytes());
+ }
+ if (i >= nextbytes) {
+ out.write(" ".getBytes());
+ } else {
+ val = data[dumpoff + i] & 0xff;
+ out.write(HEXCHARS[val >>> 4]);
+ out.write(HEXCHARS[val & 0xf]);
+ out.write(" ".getBytes());
+ }
+ }
+
+ out.write(" |".getBytes());
+
+ for (i = 0; i < FORMAT_BYTES_PER_ROW; ++i) {
+ if (i >= nextbytes) {
+ out.write(" ".getBytes());
+ } else {
+ val = data[i + dumpoff];
+ // Is it a printable character?
+ if ((val > 31) && (val < 127)) {
+ out.write(val);
+ } else {
+ out.write(".".getBytes());
+ }
+ }
+ }
+
+ out.write("|\n".getBytes());
+ }
+ }
+}
\ No newline at end of file
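
A short usage example; the input string is arbitrary and the spacing shown is only indicative of the `hexdump -C`-style layout (offset, two 8-byte groups of hex, then the printable-ASCII column):

    byte[] data = "Hello, Syndie archive!".getBytes();
    System.out.print(HexDump.dump(data));
    // first row, roughly:
    // 00000000   48 65 6c 6c 6f 2c 20 53  79 6e 64 69 65 20 61 72  |Hello, Syndie ar|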
diff --git a/src/net/i2p/util/I2PThread.java b/src/net/i2p/util/I2PThread.java
new file mode 100644
index 0000000..f33d6f0
--- /dev/null
+++ b/src/net/i2p/util/I2PThread.java
@@ -0,0 +1,122 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Set;
+
+/**
+ * In case it's useful later...
+ * (e.g. w/ native programmatic thread dumping, etc)
+ *
+ */
+public class I2PThread extends Thread {
+ private static volatile Log _log;
+ private static Set _listeners = new HashSet(4);
+ private String _name;
+ private Exception _createdBy;
+
+ public I2PThread() {
+ super();
+ if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
+ _createdBy = new Exception("Created by");
+ }
+
+ public I2PThread(String name) {
+ super(name);
+ if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
+ _createdBy = new Exception("Created by");
+ }
+
+ public I2PThread(Runnable r) {
+ super(r);
+ if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
+ _createdBy = new Exception("Created by");
+ }
+
+ public I2PThread(Runnable r, String name) {
+ super(r, name);
+ if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
+ _createdBy = new Exception("Created by");
+ }
+ public I2PThread(Runnable r, String name, boolean isDaemon) {
+ super(r, name);
+ setDaemon(isDaemon);
+ if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
+ _createdBy = new Exception("Created by");
+ }
+
+ private void log(int level, String msg) { log(level, msg, null); }
+ private void log(int level, String msg, Throwable t) {
+ // we cant assume log is created
+ if (_log == null) _log = new Log(I2PThread.class);
+ if (_log.shouldLog(level))
+ _log.log(level, msg, t);
+ }
+
+ public void run() {
+ _name = Thread.currentThread().getName();
+ log(Log.DEBUG, "New thread started: " + _name, _createdBy);
+ try {
+ super.run();
+ } catch (Throwable t) {
+ try {
+ log(Log.CRIT, "Killing thread " + getName(), t);
+ } catch (Throwable woof) {
+ System.err.println("Died within the OOM itself");
+ t.printStackTrace();
+ }
+ if (t instanceof OutOfMemoryError)
+ fireOOM((OutOfMemoryError)t);
+ }
+ log(Log.DEBUG, "Thread finished gracefully: " + _name);
+ }
+
+ protected void finalize() throws Throwable {
+ log(Log.DEBUG, "Thread finalized: " + _name);
+ super.finalize();
+ }
+
+ private void fireOOM(OutOfMemoryError oom) {
+ for (Iterator iter = _listeners.iterator(); iter.hasNext(); ) {
+ OOMEventListener listener = (OOMEventListener)iter.next();
+ listener.outOfMemory(oom);
+ }
+ }
+
+ /** register a new component that wants notification of OOM events */
+ public static void addOOMEventListener(OOMEventListener lsnr) {
+ _listeners.add(lsnr);
+ }
+
+ /** unregister a component that wants notification of OOM events */
+ public static void removeOOMEventListener(OOMEventListener lsnr) {
+ _listeners.remove(lsnr);
+ }
+
+ public interface OOMEventListener {
+ public void outOfMemory(OutOfMemoryError err);
+ }
+
+ public static void main(String args[]) {
+ I2PThread t = new I2PThread(new Runnable() {
+ public void run() {
+ throw new NullPointerException("blah");
+ }
+ });
+ t.start();
+ try {
+ Thread.sleep(10000);
+ } catch (Throwable tt) { // nop
+ }
+ }
+}
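
A sketch of using the OOM hook this class provides; what the listener does with the error (log it, trigger a restart, etc.) is up to the caller:

    // Register once at startup; any I2PThread that dies with an OutOfMemoryError
    // will invoke this callback via fireOOM().
    I2PThread.addOOMEventListener(new I2PThread.OOMEventListener() {
        public void outOfMemory(OutOfMemoryError err) {
            System.err.println("Worker thread ran out of memory: " + err.getMessage());
        }
    });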
diff --git a/src/net/i2p/util/Log.java b/src/net/i2p/util/Log.java
new file mode 100644
index 0000000..7293c51
--- /dev/null
+++ b/src/net/i2p/util/Log.java
@@ -0,0 +1,201 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import net.i2p.I2PAppContext;
+
+/**
+ * Wrapper class for whatever logging system I2P uses. This class should be
+ * instantiated and kept as a variable for each class it is used by, ala:
+ * private final static Log _log = new Log(MyClassName.class);
+ *
+ * If there is anything in here that doesn't make sense, turn off your computer
+ * and go fly a kite.
+ *
+ *
+ * @author jrandom
+ */
+public class Log {
+ private Class _class;
+ private String _className;
+ private String _name;
+ private int _minPriority;
+ private LogScope _scope;
+ private LogManager _manager;
+
+ public final static int DEBUG = 10;
+ public final static int INFO = 20;
+ public final static int WARN = 30;
+ public final static int ERROR = 40;
+ public final static int CRIT = 50;
+
+ public final static String STR_DEBUG = "DEBUG";
+ public final static String STR_INFO = "INFO";
+ public final static String STR_WARN = "WARN";
+ public final static String STR_ERROR = "ERROR";
+ public final static String STR_CRIT = "CRIT";
+
+ public static int getLevel(String level) {
+ if (level == null) return Log.CRIT;
+ level = level.toUpperCase();
+ if (STR_DEBUG.startsWith(level)) return DEBUG;
+ if (STR_INFO.startsWith(level)) return INFO;
+ if (STR_WARN.startsWith(level)) return WARN;
+ if (STR_ERROR.startsWith(level)) return ERROR;
+ if (STR_CRIT.startsWith(level)) return CRIT;
+ return CRIT;
+ }
+
+ public static String toLevelString(int level) {
+ switch (level) {
+ case DEBUG:
+ return STR_DEBUG;
+ case INFO:
+ return STR_INFO;
+ case WARN:
+ return STR_WARN;
+ case ERROR:
+ return STR_ERROR;
+ case CRIT:
+ return STR_CRIT;
+ }
+ return (level > CRIT ? STR_CRIT : STR_DEBUG);
+ }
+
+ public Log(Class cls) {
+ this(I2PAppContext.getGlobalContext().logManager(), cls, null);
+ _manager.addLog(this);
+ }
+
+ public Log(String name) {
+ this(I2PAppContext.getGlobalContext().logManager(), null, name);
+ _manager.addLog(this);
+ }
+
+ Log(LogManager manager, Class cls) {
+ this(manager, cls, null);
+ }
+
+ Log(LogManager manager, String name) {
+ this(manager, null, name);
+ }
+
+ Log(LogManager manager, Class cls, String name) {
+ _manager = manager;
+ _class = cls;
+ _className = cls != null ? cls.getName() : null;
+ _name = name;
+ _minPriority = DEBUG;
+ _scope = new LogScope(name, cls);
+ //_manager.addRecord(new LogRecord(Log.class, null, Thread.currentThread().getName(), Log.DEBUG,
+ // "Log created with manager " + manager + " for class " + cls, null));
+ }
+
+ public void log(int priority, String msg) {
+ if (priority >= _minPriority) {
+ _manager.addRecord(new LogRecord(_class, _name,
+ Thread.currentThread().getName(), priority,
+ msg, null));
+ }
+ }
+
+ public void log(int priority, String msg, Throwable t) {
+ if (priority >= _minPriority) {
+ _manager.addRecord(new LogRecord(_class, _name,
+ Thread.currentThread().getName(), priority,
+ msg, t));
+ }
+ }
+
+ public void debug(String msg) {
+ log(DEBUG, msg);
+ }
+
+ public void debug(String msg, Throwable t) {
+ log(DEBUG, msg, t);
+ }
+
+ public void info(String msg) {
+ log(INFO, msg);
+ }
+
+ public void info(String msg, Throwable t) {
+ log(INFO, msg, t);
+ }
+
+ public void warn(String msg) {
+ log(WARN, msg);
+ }
+
+ public void warn(String msg, Throwable t) {
+ log(WARN, msg, t);
+ }
+
+ public void error(String msg) {
+ log(ERROR, msg);
+ }
+
+ public void error(String msg, Throwable t) {
+ log(ERROR, msg, t);
+ }
+
+ public int getMinimumPriority() {
+ return _minPriority;
+ }
+
+ public void setMinimumPriority(int priority) {
+ _minPriority = priority;
+ //_manager.addRecord(new LogRecord(Log.class, null, Thread.currentThread().getName(), Log.DEBUG,
+ // "Log with manager " + _manager + " for class " + _class
+ // + " new priority " + toLevelString(priority), null));
+ }
+
+ public boolean shouldLog(int priority) {
+ return priority >= _minPriority;
+ }
+
+ public String getName() {
+ if (_className != null) return _className;
+
+ return _name;
+ }
+
+ public Object getScope() { return _scope; }
+ static String getScope(String name, Class cls) {
+ if ( (name == null) && (cls == null) ) return "f00";
+ if (cls == null) return name;
+ if (name == null) return cls.getName();
+ return name + "" + cls.getName();
+ }
+ private static final class LogScope {
+ private String _scopeName;
+ private Class _scopeClass;
+ private String _scopeCache;
+ public LogScope(String name, Class cls) {
+ _scopeName = name;
+ _scopeClass = cls;
+ _scopeCache = getScope(name, cls);
+ }
+ public int hashCode() {
+ return _scopeCache.hashCode();
+ }
+ public boolean equals(Object obj) {
+ if (obj == null) throw new NullPointerException("Null object scope?");
+ if (obj instanceof LogScope) {
+ LogScope s = (LogScope)obj;
+ return s._scopeCache.equals(_scopeCache);
+ } else if (obj instanceof String) {
+ return obj.equals(_scopeCache);
+ }
+
+ return false;
+ }
+ }
+}
\ No newline at end of file
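
The shouldLog() guard seen throughout the classes in this patch is the intended idiom: it avoids building the (often expensive) log message string when the level is filtered out. A minimal sketch, with SampleComponent as a hypothetical caller:

    public class SampleComponent {
        private static final Log _log = new Log(SampleComponent.class);

        void transfer(String url, long bytes) {
            // the guard skips the string concatenation entirely when DEBUG is filtered out
            if (_log.shouldLog(Log.DEBUG))
                _log.debug("Transferred " + bytes + " bytes from " + url);
        }
    }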
diff --git a/src/net/i2p/util/LogConsoleBuffer.java b/src/net/i2p/util/LogConsoleBuffer.java
new file mode 100644
index 0000000..e1d896d
--- /dev/null
+++ b/src/net/i2p/util/LogConsoleBuffer.java
@@ -0,0 +1,62 @@
+package net.i2p.util;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import net.i2p.I2PAppContext;
+
+/**
+ * Offer a glimpse into the last few console messages generated
+ *
+ */
+public class LogConsoleBuffer {
+ private I2PAppContext _context;
+ private List _buffer;
+ private List _critBuffer;
+
+ public LogConsoleBuffer(I2PAppContext context) {
+ _context = context;
+ _buffer = new ArrayList();
+ _critBuffer = new ArrayList();
+ }
+
+ void add(String msg) {
+ int lim = _context.logManager().getConsoleBufferSize();
+ synchronized (_buffer) {
+ while (_buffer.size() >= lim)
+ _buffer.remove(0);
+ _buffer.add(msg);
+ }
+ }
+ void addCritical(String msg) {
+ int lim = _context.logManager().getConsoleBufferSize();
+ synchronized (_critBuffer) {
+ while (_critBuffer.size() >= lim)
+ _critBuffer.remove(0);
+ _critBuffer.add(msg);
+ }
+ }
+
+ /**
+ * Retrieve the currently buffered messages; earlier entries in the list were
+ * generated earlier. All values are strings with no formatting (as they are written
+ * in the logs)
+ *
+ */
+ public List getMostRecentMessages() {
+ synchronized (_buffer) {
+ return new ArrayList(_buffer);
+ }
+ }
+ /**
+ * Retrieve the currently buffered critical messages; earlier entries in the list
+ * were generated earlier. All values are strings with no formatting (as they are written
+ * in the logs)
+ *
+ */
+ public List getMostRecentCriticalMessages() {
+ synchronized (_critBuffer) {
+ return new ArrayList(_critBuffer);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/util/LogLimit.java b/src/net/i2p/util/LogLimit.java
new file mode 100644
index 0000000..55bd074
--- /dev/null
+++ b/src/net/i2p/util/LogLimit.java
@@ -0,0 +1,42 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+/**
+ * Defines the log limit for a particular set of logs
+ *
+ */
+class LogLimit {
+ private String _rootName;
+ private int _limit;
+
+ public LogLimit(String name, int limit) {
+ _rootName = name;
+ _limit = limit;
+ }
+
+ public String getRootName() {
+ return _rootName;
+ }
+
+ public int getLimit() {
+ return _limit;
+ }
+
+ public void setLimit(int limit) {
+ _limit = limit;
+ }
+
+ public boolean matches(Log log) {
+ String name = log.getName();
+ if (name == null) return false;
+ return name.startsWith(_rootName);
+ }
+}
\ No newline at end of file
diff --git a/src/net/i2p/util/LogManager.java b/src/net/i2p/util/LogManager.java
new file mode 100644
index 0000000..b0bd46e
--- /dev/null
+++ b/src/net/i2p/util/LogManager.java
@@ -0,0 +1,662 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.TreeMap;
+
+import net.i2p.I2PAppContext;
+
+/**
+ * Manages the logging system, loading (and reloading) the configuration file,
+ * coordinating the log limits, and storing the set of log records pending.
+ * This also fires off a LogWriter thread that pulls pending records off and
+ * writes them where appropriate.
+ *
+ */
+public class LogManager {
+ public final static String CONFIG_LOCATION_PROP = "loggerConfigLocation";
+ public final static String FILENAME_OVERRIDE_PROP = "loggerFilenameOverride";
+ public final static String CONFIG_LOCATION_DEFAULT = "logger.config";
+ /**
+ * These define the characters in the format line of the config file
+ */
+ public static final char DATE = 'd', CLASS = 'c', THREAD = 't', PRIORITY = 'p', MESSAGE = 'm';
+
+ public final static String PROP_FORMAT = "logger.format";
+ public final static String PROP_DATEFORMAT = "logger.dateFormat";
+ public final static String PROP_FILENAME = "logger.logFileName";
+ public final static String PROP_FILESIZE = "logger.logFileSize";
+ public final static String PROP_ROTATIONLIMIT = "logger.logRotationLimit";
+ public final static String PROP_DISPLAYONSCREEN = "logger.displayOnScreen";
+ public final static String PROP_CONSOLEBUFFERSIZE = "logger.consoleBufferSize";
+ public final static String PROP_DISPLAYONSCREENLEVEL = "logger.minimumOnScreenLevel";
+ public final static String PROP_DEFAULTLEVEL = "logger.defaultLevel";
+ public final static String PROP_RECORD_PREFIX = "logger.record.";
+
+ public final static String DEFAULT_FORMAT = DATE + " " + PRIORITY + " [" + THREAD + "] " + CLASS + ": " + MESSAGE;
+ public final static String DEFAULT_DATEFORMAT = "HH:mm:ss.SSS";
+ public final static String DEFAULT_FILENAME = "logs/log-#.txt";
+ public final static String DEFAULT_FILESIZE = "10m";
+ public final static boolean DEFAULT_DISPLAYONSCREEN = true;
+ public final static int DEFAULT_CONSOLEBUFFERSIZE = 20;
+ public final static String DEFAULT_ROTATIONLIMIT = "2";
+ public final static String DEFAULT_DEFAULTLEVEL = Log.STR_ERROR;
+ public final static String DEFAULT_ONSCREENLEVEL = Log.STR_CRIT;
+
+ private I2PAppContext _context;
+ private Log _log;
+
+ /** when was the config file last read (or -1 if never) */
+ private long _configLastRead;
+
+ /** filename of the config file */
+ private String _location;
+ /** Ordered list of LogRecord elements that have not been written out yet */
+ private List _records;
+ /** List of explicit overrides of log levels (LogLimit objects) */
+ private List _limits;
+ /** String (scope) to Log object */
+ private Map _logs;
+ /** who clears and writes our records */
+ private LogWriter _writer;
+
+ /**
+ * default log level for logs that aren't explicitly controlled
+ * through a LogLimit in _limits
+ */
+ private int _defaultLimit;
+ /** Log record format string */
+ private char[] _format;
+ /** Date format instance */
+ private SimpleDateFormat _dateFormat;
+ /** Date format string (for the SimpleDateFormat instance) */
+ private String _dateFormatPattern;
+ /** log filename pattern */
+ private String _baseLogfilename;
+ /** max # bytes in the logfile before rotation */
+ private int _fileSize;
+ /** max # rotated logs */
+ private int _rotationLimit;
+ /** minimum log level to be displayed on stdout */
+ private int _onScreenLimit;
+
+ /** whether or not we even want to display anything on stdout */
+ private boolean _displayOnScreen;
+ /** how many records we want to buffer in the "recent logs" list */
+ private int _consoleBufferSize;
+ /** the actual "recent logs" list */
+ private LogConsoleBuffer _consoleBuffer;
+
+ private boolean _alreadyNoticedMissingConfig;
+
+ public LogManager(I2PAppContext context) {
+ _displayOnScreen = true;
+ _alreadyNoticedMissingConfig = false;
+ _records = new ArrayList();
+ _limits = new ArrayList(128);
+ _logs = new HashMap(128);
+ _defaultLimit = Log.ERROR;
+ _configLastRead = 0;
+ _location = context.getProperty(CONFIG_LOCATION_PROP, CONFIG_LOCATION_DEFAULT);
+ _context = context;
+ _log = getLog(LogManager.class);
+ _consoleBuffer = new LogConsoleBuffer(context);
+ loadConfig();
+ _writer = new LogWriter(this);
+ Thread t = new I2PThread(_writer);
+ t.setName("LogWriter");
+ t.setDaemon(true);
+ t.start();
+ try {
+ Runtime.getRuntime().addShutdownHook(new ShutdownHook());
+ } catch (IllegalStateException ise) {
+ // shutdown in progress, fsck it
+ }
+ //System.out.println("Created logManager " + this + " with context: " + context);
+ }
+
+ private LogManager() { // nop
+ }
+
+ public Log getLog(Class cls) { return getLog(cls, null); }
+ public Log getLog(String name) { return getLog(null, name); }
+ public Log getLog(Class cls, String name) {
+ Log rv = null;
+ String scope = Log.getScope(name, cls);
+ boolean isNew = false;
+ synchronized (_logs) {
+ rv = (Log)_logs.get(scope);
+ if (rv == null) {
+ rv = new Log(this, cls, name);
+ _logs.put(scope, rv);
+ isNew = true;
+ }
+ }
+ if (isNew)
+ updateLimit(rv);
+ return rv;
+ }
+ public List getLogs() {
+ List rv = null;
+ synchronized (_logs) {
+ rv = new ArrayList(_logs.values());
+ }
+ return rv;
+ }
+ void addLog(Log log) {
+ synchronized (_logs) {
+ if (!_logs.containsKey(log.getScope()))
+ _logs.put(log.getScope(), log);
+ }
+ updateLimit(log);
+ }
+
+ public LogConsoleBuffer getBuffer() { return _consoleBuffer; }
+
+ public void setDisplayOnScreen(boolean yes) {
+ _displayOnScreen = yes;
+ }
+
+ public boolean displayOnScreen() {
+ return _displayOnScreen;
+ }
+
+ public int getDisplayOnScreenLevel() {
+ return _onScreenLimit;
+ }
+
+ public void setDisplayOnScreenLevel(int level) {
+ _onScreenLimit = level;
+ }
+
+ public int getConsoleBufferSize() {
+ return _consoleBufferSize;
+ }
+
+ public void setConsoleBufferSize(int numRecords) {
+ _consoleBufferSize = numRecords;
+ }
+
+ public void setConfig(String filename) {
+ _log.debug("Config filename set to " + filename);
+ _location = filename;
+ loadConfig();
+ }
+
+ /**
+ * Used by Log to add records to the queue
+ *
+ */
+ void addRecord(LogRecord record) {
+ int numRecords = 0;
+ synchronized (_records) {
+ _records.add(record);
+ numRecords = _records.size();
+ }
+
+ if (numRecords > 100) {
+ // the writer waits 10 seconds *or* until we tell them to wake up
+ // before rereading the config and writing out any log messages
+ synchronized (_writer) {
+ _writer.notifyAll();
+ }
+ }
+ }
+
+ /**
+ * Called periodically by the log writer's thread
+ *
+ */
+ void rereadConfig() {
+ // perhaps check modification time
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Rereading configuration file");
+ loadConfig();
+ }
+
+ ///
+ ///
+
+ //
+ //
+ //
+
+ private void loadConfig() {
+ File cfgFile = new File(_location);
+ if (!cfgFile.exists()) {
+ if (!_alreadyNoticedMissingConfig) {
+ if (_log.shouldLog(Log.WARN))
+ _log.warn("Log file " + _location + " does not exist");
+ //System.err.println("Log file " + _location + " does not exist");
+ _alreadyNoticedMissingConfig = true;
+ }
+ parseConfig(new Properties());
+ updateLimits();
+ return;
+ }
+ _alreadyNoticedMissingConfig = false;
+
+ if ((_configLastRead > 0) && (_configLastRead >= cfgFile.lastModified())) {
+ if (_log.shouldLog(Log.INFO))
+ _log.info("Short circuiting config read (last read: "
+ + (_context.clock().now() - _configLastRead) + "ms ago, config file modified "
+ + (_context.clock().now() - cfgFile.lastModified()) + "ms ago");
+ return;
+ }
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Loading config from " + _location);
+
+ Properties p = new Properties();
+ FileInputStream fis = null;
+ try {
+ fis = new FileInputStream(cfgFile);
+ p.load(fis);
+ _configLastRead = _context.clock().now();
+ } catch (IOException ioe) {
+ System.err.println("Error loading logger config from " + new File(_location).getAbsolutePath());
+ } finally {
+ if (fis != null) try {
+ fis.close();
+ } catch (IOException ioe) { // nop
+ }
+ }
+ parseConfig(p);
+ updateLimits();
+ }
+
+ private void parseConfig(Properties config) {
+ String fmt = config.getProperty(PROP_FORMAT, DEFAULT_FORMAT);
+ _format = fmt.toCharArray();
+
+ String df = config.getProperty(PROP_DATEFORMAT, DEFAULT_DATEFORMAT);
+ _dateFormatPattern = df;
+ _dateFormat = new SimpleDateFormat(df);
+
+ String disp = config.getProperty(PROP_DISPLAYONSCREEN);
+ if (disp == null)
+ _displayOnScreen = DEFAULT_DISPLAYONSCREEN;
+ else {
+ if ("TRUE".equals(disp.toUpperCase().trim()))
+ _displayOnScreen = true;
+ else if ("YES".equals(disp.toUpperCase().trim()))
+ _displayOnScreen = true;
+ else
+ _displayOnScreen = false;
+ }
+
+ String filenameOverride = _context.getProperty(FILENAME_OVERRIDE_PROP);
+ if (filenameOverride != null)
+ _baseLogfilename = filenameOverride;
+ else
+ _baseLogfilename = config.getProperty(PROP_FILENAME, DEFAULT_FILENAME);
+
+ _fileSize = getFileSize(config.getProperty(PROP_FILESIZE, DEFAULT_FILESIZE));
+ _rotationLimit = -1;
+ try {
+ String str = config.getProperty(PROP_ROTATIONLIMIT);
+ _rotationLimit = Integer.parseInt(config.getProperty(PROP_ROTATIONLIMIT, DEFAULT_ROTATIONLIMIT));
+ } catch (NumberFormatException nfe) {
+ System.err.println("Invalid rotation limit");
+ nfe.printStackTrace();
+ }
+
+ _defaultLimit = Log.getLevel(config.getProperty(PROP_DEFAULTLEVEL, DEFAULT_DEFAULTLEVEL));
+
+ _onScreenLimit = Log.getLevel(config.getProperty(PROP_DISPLAYONSCREENLEVEL, DEFAULT_ONSCREENLEVEL));
+
+ try {
+ String str = config.getProperty(PROP_CONSOLEBUFFERSIZE);
+ if (str == null)
+ _consoleBufferSize = DEFAULT_CONSOLEBUFFERSIZE;
+ else
+ _consoleBufferSize = Integer.parseInt(str);
+ } catch (NumberFormatException nfe) {
+ System.err.println("Invalid console buffer size");
+ nfe.printStackTrace();
+ _consoleBufferSize = DEFAULT_CONSOLEBUFFERSIZE;
+ }
+
+ if (_log.shouldLog(Log.DEBUG))
+ _log.debug("Log set to use the base log file as " + _baseLogfilename);
+
+ parseLimits(config);
+ }
+
+ private void parseLimits(Properties config) {
+ parseLimits(config, PROP_RECORD_PREFIX);
+ }
+ private void parseLimits(Properties config, String recordPrefix) {
+ synchronized (_limits) {
+ _limits.clear();
+ }
+ if (config != null) {
+ for (Iterator iter = config.keySet().iterator(); iter.hasNext();) {
+ String key = (String) iter.next();
+ String val = config.getProperty(key);
+
+ // if we're filtering the records (e.g. logger.record.*) then
+ // filter accordingly (stripping off that prefix for matches)
+ if (recordPrefix != null) {
+ if (key.startsWith(recordPrefix)) {
+ key = key.substring(recordPrefix.length());
+ } else {
+ continue;
+ }
+ }
+
+ LogLimit lim = new LogLimit(key, Log.getLevel(val));
+ //_log.debug("Limit found for " + name + " as " + val);
+ synchronized (_limits) {
+ if (!_limits.contains(lim))
+ _limits.add(lim);
+ }
+ }
+ }
+ updateLimits();
+ }
+
+ /**
+ * Update the existing limit overrides
+ *
+ * @param limits mapping of prefix to log level string (not the log #)
+ */
+ public void setLimits(Properties limits) {
+ parseLimits(limits, null);
+ }
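+ // Illustrative use (editorial comment, not in the original patch):
+ //   Properties p = new Properties();
+ //   p.setProperty("net.i2p.util", "DEBUG"); // level string assumed to match Log.STR_DEBUG
+ //   logManager.setLimits(p);
+ // This replaces the current overrides with a single DEBUG override for everything
+ // under net.i2p.util; the config file itself is only rewritten by saveConfig().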
+
+ /**
+ * Update the date format
+ *
+ * @return true if the format was updated, false if it was invalid
+ */
+ public boolean setDateFormat(String format) {
+ if (format == null) return false;
+
+ try {
+ SimpleDateFormat fmt = new SimpleDateFormat(format);
+ _dateFormatPattern = format;
+ _dateFormat = fmt;
+ return true;
+ } catch (IllegalArgumentException iae) {
+ getLog(LogManager.class).error("Date format is invalid [" + format + "]", iae);
+ return false;
+ }
+ }
+
+ /**
+ * Update the log file size limit
+ */
+ public void setFileSize(int numBytes) {
+ if (numBytes > 0)
+ _fileSize = numBytes;
+ }
+
+ public String getDefaultLimit() { return Log.toLevelString(_defaultLimit); }
+ public void setDefaultLimit(String lim) {
+ _defaultLimit = Log.getLevel(lim);
+ updateLimits();
+ }
+
+ /**
+ * Return a mapping of the explicit overrides - path prefix to (text
+ * formatted) limit.
+ *
+ */
+ public Properties getLimits() {
+ Properties rv = new Properties();
+ synchronized (_limits) {
+ for (int i = 0; i < _limits.size(); i++) {
+ LogLimit lim = (LogLimit)_limits.get(i);
+ rv.setProperty(lim.getRootName(), Log.toLevelString(lim.getLimit()));
+ }
+ }
+ return rv;
+ }
+
+ /**
+ * Determine how many bytes are in the given formatted string (5m, 60g, 100k, etc)
+ *
+ */
+ public int getFileSize(String size) {
+ int sz = -1;
+ try {
+ String v = size;
+ char mod = size.toUpperCase().charAt(size.length() - 1);
+ if (!Character.isDigit(mod)) v = size.substring(0, size.length() - 1);
+ int val = Integer.parseInt(v);
+ switch (mod) {
+ case 'K':
+ val *= 1024;
+ break;
+ case 'M':
+ val *= 1024 * 1024;
+ break;
+ case 'G':
+ val *= 1024 * 1024 * 1024;
+ break;
+ default:
+ // blah, noop
+ break;
+ }
+ return val;
+ } catch (Throwable t) {
+ System.err.println("Error parsing config for filesize: [" + size + "]");
+ t.printStackTrace();
+ return -1;
+ }
+ }
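+ // Worked example (editorial comment, not in the original patch):
+ //   getFileSize("10m")  = 10 * 1024 * 1024 = 10485760 bytes
+ //   getFileSize("500k") = 500 * 1024 = 512000 bytes
+ //   getFileSize("4096") = 4096 bytes (no suffix); unparseable input returns -1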
+
+ private void updateLimits() {
+ Map logs = null;
+ synchronized (_logs) {
+ logs = new HashMap(_logs);
+ }
+ for (Iterator iter = logs.values().iterator(); iter.hasNext();) {
+ Log log = (Log) iter.next();
+ updateLimit(log);
+ }
+ }
+
+ private void updateLimit(Log log) {
+ List limits = getLimits(log);
+ LogLimit max = null;
+ LogLimit notMax = null;
+ if (limits != null) {
+ for (int i = 0; i < limits.size(); i++) {
+ LogLimit cur = (LogLimit) limits.get(i);
+ if (max == null)
+ max = cur;
+ else {
+ if (cur.getRootName().length() > max.getRootName().length()) {
+ notMax = max;
+ max = cur;
+ }
+ }
+ }
+ }
+ if (max != null) {
+ log.setMinimumPriority(max.getLimit());
+ } else {
+ //if (_log != null)
+ // _log.debug("The log for " + log.getClass() + " has no matching limits");
+ log.setMinimumPriority(_defaultLimit);
+ }
+ }
+
+ private List getLimits(Log log) {
+ ArrayList limits = null; // new ArrayList(4);
+ synchronized (_limits) {
+ for (int i = 0; i < _limits.size(); i++) {
+ LogLimit limit = (LogLimit)_limits.get(i);
+ if (limit.matches(log)) {
+ if (limits == null)
+ limits = new ArrayList(4);
+ limits.add(limit);
+ }
+ }
+ }
+ return limits;
+ }
+
+ ///
+ /// would be friend methods for LogWriter...
+ ///
+ public String getBaseLogfilename() {
+ return _baseLogfilename;
+ }
+
+ public void setBaseLogfilename(String filenamePattern) {
+ _baseLogfilename = filenamePattern;
+ }
+
+ public int getFileSize() {
+ return _fileSize;
+ }
+
+ public int getRotationLimit() {
+ return _rotationLimit;
+ }
+
+ public boolean saveConfig() {
+ String config = createConfig();
+ FileOutputStream fos = null;
+ try {
+ fos = new FileOutputStream(_location);
+ fos.write(config.getBytes());
+ return true;
+ } catch (IOException ioe) {
+ getLog(LogManager.class).error("Error saving the config", ioe);
+ return false;
+ } finally {
+ if (fos != null) try { fos.close(); } catch (IOException ioe) {}
+ }
+ }
+
+ private String createConfig() {
+ StringBuffer buf = new StringBuffer(8*1024);
+ buf.append(PROP_FORMAT).append('=').append(new String(_format)).append('\n');
+ buf.append(PROP_DATEFORMAT).append('=').append(_dateFormatPattern).append('\n');
+ buf.append(PROP_DISPLAYONSCREEN).append('=').append((_displayOnScreen ? "TRUE" : "FALSE")).append('\n');
+ String filenameOverride = _context.getProperty(FILENAME_OVERRIDE_PROP);
+ if (filenameOverride == null)
+ buf.append(PROP_FILENAME).append('=').append(_baseLogfilename).append('\n');
+ else // this isn't technically correct - this could mess with some funky scenarios
+ buf.append(PROP_FILENAME).append('=').append(DEFAULT_FILENAME).append('\n');
+
+ if (_fileSize >= 1024*1024)
+ buf.append(PROP_FILESIZE).append('=').append( (_fileSize / (1024*1024))).append("m\n");
+ else if (_fileSize >= 1024)
+ buf.append(PROP_FILESIZE).append('=').append( (_fileSize / (1024))).append("k\n");
+ else if (_fileSize > 0)
+ buf.append(PROP_FILESIZE).append('=').append(_fileSize).append('\n');
+ // if <= 0, dont specify
+
+ buf.append(PROP_ROTATIONLIMIT).append('=').append(_rotationLimit).append('\n');
+ buf.append(PROP_DEFAULTLEVEL).append('=').append(Log.toLevelString(_defaultLimit)).append('\n');
+ buf.append(PROP_DISPLAYONSCREENLEVEL).append('=').append(Log.toLevelString(_onScreenLimit)).append('\n');
+ buf.append(PROP_CONSOLEBUFFERSIZE).append('=').append(_consoleBufferSize).append('\n');
+
+ buf.append("# log limit overrides:\n");
+
+ TreeMap limits = new TreeMap();
+ synchronized (_limits) {
+ for (int i = 0; i < _limits.size(); i++) {
+ LogLimit lim = (LogLimit)_limits.get(i);
+ limits.put(lim.getRootName(), Log.toLevelString(lim.getLimit()));
+ }
+ }
+ for (Iterator iter = limits.keySet().iterator(); iter.hasNext(); ) {
+ String path = (String)iter.next();
+ String lim = (String)limits.get(path);
+ buf.append(PROP_RECORD_PREFIX).append(path);
+ buf.append('=').append(lim).append('\n');
+ }
+
+ return buf.toString();
+ }
+
+
+ //List _getRecords() { return _records; }
+ List _removeAll() {
+ List vals = null;
+ synchronized (_records) {
+ if (_records.size() <= 0)
+ return null;
+ vals = new ArrayList(_records);
+ _records.clear();
+ }
+ return vals;
+ }
+
+ public char[] getFormat() {
+ return _format;
+ }
+
+ public void setFormat(char fmt[]) {
+ _format = fmt;
+ }
+
+ public SimpleDateFormat getDateFormat() {
+ return _dateFormat;
+ }
+ public String getDateFormatPattern() {
+ return _dateFormatPattern;
+ }
+
+ public static void main(String args[]) {
+ I2PAppContext ctx = new I2PAppContext();
+ Log l1 = ctx.logManager().getLog("test.1");
+ Log l2 = ctx.logManager().getLog("test.2");
+ Log l21 = ctx.logManager().getLog("test.2.1");
+ Log l = ctx.logManager().getLog("test");
+ l.debug("this should fail");
+ l.info("this should pass");
+ l1.warn("this should pass");
+ l1.info("this should fail");
+ l2.error("this should fail");
+ l21.debug("this should pass");
+ l1.error("test exception", new Exception("test"));
+ l1.error("test exception", new Exception("test"));
+ try {
+ Thread.sleep(2 * 1000);
+ } catch (Throwable t) { // nop
+ }
+ System.exit(0);
+ }
+
+ public void shutdown() {
+ _log.log(Log.WARN, "Shutting down logger");
+ _writer.flushRecords(false);
+ }
+
+ private static int __id = 0;
+ private class ShutdownHook extends Thread {
+ private int _id;
+ public ShutdownHook() {
+ _id = ++__id;
+ }
+ public void run() {
+ setName("Log " + _id + " shutdown ");
+ shutdown();
+ }
+ }
+}
\ No newline at end of file
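Putting the properties above together, a logger.config that parseConfig() would accept might look like the following (editorial sketch, not part of the patch). The values shown are just the documented defaults plus two illustrative logger.record overrides; the level strings (DEBUG/WARN/ERROR/CRIT) are assumed to match Log's STR_* constants, which are outside this excerpt.

    # logger.config - read (and periodically re-read) by LogManager
    logger.format=d p [t] c: m
    logger.dateFormat=HH:mm:ss.SSS
    logger.logFileName=logs/log-#.txt
    logger.logFileSize=10m
    logger.logRotationLimit=2
    logger.displayOnScreen=true
    logger.minimumOnScreenLevel=CRIT
    logger.consoleBufferSize=20
    logger.defaultLevel=ERROR
    # per-prefix overrides; the longest matching prefix wins
    logger.record.net.i2p=WARN
    logger.record.net.i2p.util.LogManager=DEBUG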
diff --git a/src/net/i2p/util/LogRecord.java b/src/net/i2p/util/LogRecord.java
new file mode 100644
index 0000000..fa1df45
--- /dev/null
+++ b/src/net/i2p/util/LogRecord.java
@@ -0,0 +1,62 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+/**
+ * A single log entry: the date, source class/name, thread, priority, message,
+ * and optional throwable, queued by Log and drained by the LogWriter.
+ *
+ */
+class LogRecord {
+ private long _date;
+ private Class _source;
+ private String _name;
+ private String _threadName;
+ private int _priority;
+ private String _message;
+ private Throwable _throwable;
+
+ public LogRecord(Class src, String name, String threadName, int priority, String msg, Throwable t) {
+ _date = Clock.getInstance().now();
+ _source = src;
+ _name = name;
+ _threadName = threadName;
+ _priority = priority;
+ _message = msg;
+ _throwable = t;
+ }
+
+ public long getDate() {
+ return _date;
+ }
+
+ public Class getSource() {
+ return _source;
+ }
+
+ public String getSourceName() {
+ return _name;
+ }
+
+ public String getThreadName() {
+ return _threadName;
+ }
+
+ public int getPriority() {
+ return _priority;
+ }
+
+ public String getMessage() {
+ return _message;
+ }
+
+ public Throwable getThrowable() {
+ return _throwable;
+ }
+}
\ No newline at end of file
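For reference (editorial note, not part of the patch): with LogManager's default format string d p [t] c: m and date format HH:mm:ss.SSS, the formatter in the next file renders a record as roughly the following single line. The values are illustrative, and the priority, thread, and class fields are padded to the fixed widths defined below.

    14:02:37.854 CRIT  [LogWriter   ] net.i2p.util.LogManager       : Shutting down logger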
diff --git a/src/net/i2p/util/LogRecordFormatter.java b/src/net/i2p/util/LogRecordFormatter.java
new file mode 100644
index 0000000..f280e7b
--- /dev/null
+++ b/src/net/i2p/util/LogRecordFormatter.java
@@ -0,0 +1,104 @@
+package net.i2p.util;
+
+/*
+ * free (adj.): unencumbered; not under the control of others
+ * Written by jrandom in 2003 and released into the public domain
+ * with no warranty of any kind, either expressed or implied.
+ * It probably won't make your computer catch on fire, or eat
+ * your children, but it might. Use at your own risk.
+ *
+ */
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.util.Date;
+
+/**
+ * Render a log record according to the log manager's settings
+ *
+ */
+class LogRecordFormatter {
+ private final static String NL = System.getProperty("line.separator");
+ // arbitrary max length for the classname property (this makes it line up nicely)
+ private final static int MAX_WHERE_LENGTH = 30;
+ // if we're going to have one for where... be consistent
+ private final static int MAX_THREAD_LENGTH = 12;
+ private final static int MAX_PRIORITY_LENGTH = 5;
+
+ public static String formatRecord(LogManager manager, LogRecord rec) {
+ int size = 128 + rec.getMessage().length();
+ if (rec.getThrowable() != null)
+ size += 512;
+ StringBuffer buf = new StringBuffer(size);
+ char format[] = manager.getFormat();
+ for (int i = 0; i < format.length; ++i) {
+ switch (format[i]) {
+ case LogManager.DATE:
+ buf.append(getWhen(manager, rec));
+ break;
+ case LogManager.CLASS:
+ buf.append(getWhere(rec));
+ break;
+ case LogManager.THREAD:
+ buf.append(getThread(rec));
+ break;
+ case LogManager.PRIORITY:
+ buf.append(getPriority(rec));
+ break;
+ case LogManager.MESSAGE:
+ buf.append(getWhat(rec));
+ break;
+ default:
+ buf.append(format[i]);
+ break;
+ }
+ }
+ buf.append(NL);
+ if (rec.getThrowable() != null) {
+ ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+ PrintWriter pw = new PrintWriter(baos, true);
+ rec.getThrowable().printStackTrace(pw);
+ try {
+ pw.flush();
+ baos.flush();
+ } catch (IOException ioe) { // nop
+ }
+ byte tb[] = baos.toByteArray();
+ buf.append(new String(tb));
+ }
+ return buf.toString();
+ }
+
+ private static String getThread(LogRecord logRecord) {
+ return toString(logRecord.getThreadName(), MAX_THREAD_LENGTH);
+ }
+
+ private static String getWhen(LogManager manager, LogRecord logRecord) {
+ return manager.getDateFormat().format(new Date(logRecord.getDate()));
+ }
+
+ private static String getPriority(LogRecord rec) {
+ return toString(Log.toLevelString(rec.getPriority()), MAX_PRIORITY_LENGTH);
+ }
+
+ private static String getWhat(LogRecord rec) {
+ return rec.getMessage();
+ }
+
+ private static String getWhere(LogRecord rec) {
+ String src = (rec.getSource() != null ? rec.getSource().getName() : rec.getSourceName());
+ if (src == null) src = "
+ *
+ *
+ *
+ * native run time: 6090ms (60ms each)
+ * java run time: 68067ms (673ms each)
+ * native = 8.947066860593239% of pure java time
+ *
+ *
+ *
+ * WARN: Native BigInteger library jbigi not loaded - using pure java
+ *
+ *
+ * java run time: 64653ms (640ms each)
+ * However, we couldn't load the native library, so this doesn't test much
+ *
+ *
+ */
+public class NativeBigInteger extends BigInteger {
+ /** did we load the native lib correctly? */
+ private static boolean _nativeOk = false;
+ /**
+ * do we want to dump some basic success/failure info to stderr during
+ * initialization? this would otherwise use the Log component, but this makes
+ * it easier for other systems to reuse this class
+ */
+ private static final boolean _doLog = System.getProperty("jbigi.dontLog") == null;
+
+ private final static String JBIGI_OPTIMIZATION_K6 = "k6";
+ private final static String JBIGI_OPTIMIZATION_K6_2 = "k62";
+ private final static String JBIGI_OPTIMIZATION_K6_3 = "k63";
+ private final static String JBIGI_OPTIMIZATION_ATHLON = "athlon";
+ private final static String JBIGI_OPTIMIZATION_ATHLON64 = "athlon64";
+ private final static String JBIGI_OPTIMIZATION_PENTIUM = "pentium";
+ private final static String JBIGI_OPTIMIZATION_PENTIUMMMX = "pentiummmx";
+ private final static String JBIGI_OPTIMIZATION_PENTIUM2 = "pentium2";
+ private final static String JBIGI_OPTIMIZATION_PENTIUM3 = "pentium3";
+ private final static String JBIGI_OPTIMIZATION_PENTIUM4 = "pentium4";
+ private final static String JBIGI_OPTIMIZATION_VIAC3 = "viac3";
+
+ private static final boolean _isWin = System.getProperty("os.name").startsWith("Win");
+ private static final boolean _isOS2 = System.getProperty("os.name").startsWith("OS/2");
+ private static final boolean _isMac = System.getProperty("os.name").startsWith("Mac");
+ private static final boolean _isLinux = System.getProperty("os.name").toLowerCase().indexOf("linux") != -1;
+ private static final boolean _isFreebsd = System.getProperty("os.name").toLowerCase().indexOf("freebsd") != -1;
+ private static final boolean _isNix = !(_isWin || _isMac || _isOS2);
+ /* libjbigi.so vs jbigi.dll */
+ private static final String _libPrefix = (_isWin || _isOS2 ? "" : "lib");
+ private static final String _libSuffix = (_isWin || _isOS2 ? ".dll" : _isMac ? ".jnilib" : ".so");
+
+ private final static String sCPUType; //The CPU Type to optimize for (one of the above strings)
+
+ static {
+ if (_isMac) // replace with osx/mac friendly jni cpu type detection when we have one
+ sCPUType = null;
+ else
+ sCPUType = resolveCPUType();
+ loadNative();
+ }
+
+ /** Tries to resolve the best type of CPU that we have an optimized jbigi-dll/so for.
+ * @return A string containing the CPU-type or null if CPU type is unknown
+ */
+ private static String resolveCPUType() {
+ boolean is64 = (-1 != System.getProperty("os.arch").indexOf("64"));
+ if (is64)
+ return JBIGI_OPTIMIZATION_ATHLON64;
+
+ try {
+ CPUInfo c = CPUID.getInfo();
+ if (c.IsC3Compatible())
+ return JBIGI_OPTIMIZATION_VIAC3;
+ if (c instanceof AMDCPUInfo) {
+ AMDCPUInfo amdcpu = (AMDCPUInfo) c;
+ if (amdcpu.IsAthlon64Compatible())
+ return JBIGI_OPTIMIZATION_ATHLON64;
+ if (amdcpu.IsAthlonCompatible())
+ return JBIGI_OPTIMIZATION_ATHLON;
+ if (amdcpu.IsK6_3_Compatible())
+ return JBIGI_OPTIMIZATION_K6_3;
+ if (amdcpu.IsK6_2_Compatible())
+ return JBIGI_OPTIMIZATION_K6_2;
+ if (amdcpu.IsK6Compatible())
+ return JBIGI_OPTIMIZATION_K6;
+ } else if (c instanceof IntelCPUInfo) {
+ IntelCPUInfo intelcpu = (IntelCPUInfo) c;
+ if (intelcpu.IsPentium4Compatible())
+ return JBIGI_OPTIMIZATION_PENTIUM4;
+ if (intelcpu.IsPentium3Compatible())
+ return JBIGI_OPTIMIZATION_PENTIUM3;
+ if (intelcpu.IsPentium2Compatible())
+ return JBIGI_OPTIMIZATION_PENTIUM2;
+ if (intelcpu.IsPentiumMMXCompatible())
+ return JBIGI_OPTIMIZATION_PENTIUMMMX;
+ if (intelcpu.IsPentiumCompatible())
+ return JBIGI_OPTIMIZATION_PENTIUM;
+ }
+ return null;
+ } catch (UnknownCPUException e) {
+ return null; //TODO: Log something here maybe..
+ }
+ }
+
+ /**
+ * calculate (base ^ exponent) % modulus.
+ *
+ * @param base
+ * big endian twos complement representation of the base (but it must be positive)
+ * @param exponent
+ * big endian twos complement representation of the exponent
+ * @param modulus
+ * big endian twos complement representation of the modulus
+ * @return big endian twos complement representation of (base ^ exponent) % modulus
+ */
+ public native static byte[] nativeModPow(byte base[], byte exponent[], byte modulus[]);
+
+ /**
+ * Converts a BigInteger byte-array to a 'double'
+ * @param ba Big endian twos complement representation of the BigInteger to convert to a double
+ * @return The plain double-value represented by 'ba'
+ */
+ public native static double nativeDoubleValue(byte ba[]);
+
+ private byte[] cachedBa;
+
+ public NativeBigInteger(byte[] val) {
+ super(val);
+ }
+
+ public NativeBigInteger(int signum, byte[] magnitude) {
+ super(signum, magnitude);
+ }
+
+ public NativeBigInteger(int bitlen, int certainty, Random rnd) {
+ super(bitlen, certainty, rnd);
+ }
+
+ public NativeBigInteger(int numbits, Random rnd) {
+ super(numbits, rnd);
+ }
+
+ public NativeBigInteger(String val) {
+ super(val);
+ }
+
+ public NativeBigInteger(String val, int radix) {
+ super(val, radix);
+ }
+ /** Creates a new NativeBigInteger with the same value
+ * as the supplied BigInteger. Warning: not very efficient.
+ */
+ public NativeBigInteger(BigInteger integer) {
+ //Now, why doesn't sun provide a constructor
+ //like this one in BigInteger?
+ this(integer.toByteArray());
+ }
+
+ public BigInteger modPow(BigInteger exponent, BigInteger m) {
+ if (_nativeOk)
+ return new NativeBigInteger(nativeModPow(toByteArray(), exponent.toByteArray(), m.toByteArray()));
+ else
+ return super.modPow(exponent, m);
+ }
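+ // Usage sketch (editorial comment, not in the original patch): only the receiver needs
+ // to be a NativeBigInteger, since modPow() serializes the other operands itself:
+ //   BigInteger base   = new NativeBigInteger(1, baseBytes);
+ //   BigInteger result = base.modPow(exponent, modulus);
+ // This falls through to nativeModPow() when jbigi loaded, and to the pure-java
+ // BigInteger.modPow() otherwise.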
+ public byte[] toByteArray(){
+ if(cachedBa == null) //Since we are immutable it is safe to never update the cached ba after it has initially been generated
+ cachedBa = super.toByteArray();
+ return cachedBa;
+ }
+
+ public double doubleValue() {
+ if (_nativeOk)
+ return nativeDoubleValue(toByteArray());
+ else
+ return super.doubleValue();
+ }
+ /**
+ *
+ * @return True iff native methods will be used by this class
+ */
+ public static boolean isNative(){
+ return _nativeOk;
+ }
+
+ /**
+ *
+ * Try loading it from an explicitly built jbigi.dll / libjbigi.so first, before
+ * looking into a jbigi.jar for any other libraries.
+ * + * @return true if it was loaded successfully, else false + * + */ + private static final boolean loadGeneric(boolean optimized) { + return loadGeneric(getMiddleName(optimized)); + } + private static final boolean loadGeneric(String name) { + try { + if(name == null) + return false; + System.loadLibrary(name); + return true; + } catch (UnsatisfiedLinkError ule) { + return false; + } + } + + /** + *Check all of the jars in the classpath for the file specified by the + * environmental property "jbigi.impl" and load it as the native library + * implementation. For instance, a windows user on a p4 would define + * -Djbigi.impl=win-686 if there is a jbigi.jar in the classpath containing the + * files "win-686", "win-athlon", "freebsd-p4", "linux-p3", where each + * of those files contain the correct binary file for a native library (e.g. + * windows DLL, or a *nix .so).
+ * + *This is a pretty ugly hack, using the general technique illustrated by the + * onion FEC libraries. It works by pulling the resource, writing out the + * byte stream to a temporary file, loading the native library from that file, + * then deleting the file.
+ * + * @return true if it was loaded successfully, else false + * + */ + private static final boolean loadFromResource(boolean optimized) { + String resourceName = getResourceName(optimized); + return loadFromResource(resourceName); + } + private static final boolean loadFromResource(String resourceName) { + if (resourceName == null) return false; + //URL resource = NativeBigInteger.class.getClassLoader().getResource(resourceName); + URL resource = ClassLoader.getSystemResource(resourceName); + if (resource == null) { + if (_doLog) + System.err.println("NOTICE: Resource name [" + resourceName + "] was not found"); + return false; + } + + File outFile = null; + FileOutputStream fos = null; + try { + InputStream libStream = resource.openStream(); + outFile = new File(_libPrefix + "jbigi" + _libSuffix); + fos = new FileOutputStream(outFile); + byte buf[] = new byte[4096*1024]; + while (true) { + int read = libStream.read(buf); + if (read < 0) break; + fos.write(buf, 0, read); + } + fos.close(); + fos = null; + System.load(outFile.getAbsolutePath()); //System.load requires an absolute path to the lib + return true; + } catch (UnsatisfiedLinkError ule) { + if (_doLog) { + System.err.println("ERROR: The resource " + resourceName + + " was not a valid library for this platform"); + ule.printStackTrace(); + } + return false; + } catch (IOException ioe) { + if (_doLog) { + System.err.println("ERROR: Problem writing out the temporary native library data"); + ioe.printStackTrace(); + } + return false; + } finally { + if (fos != null) { + try { fos.close(); } catch (IOException ioe) {} + } + } + } + + private static final String getResourceName(boolean optimized) { + String pref = _libPrefix; + String middle = getMiddleName(optimized); + String suff = _libSuffix; + if(pref == null || middle == null || suff == null) + return null; + return pref+middle+suff; + } + + private static final String getMiddleName(boolean optimized){ + + String sAppend; + if(optimized) + { + if(sCPUType == null) + return null; + else + sAppend = "-"+sCPUType; + }else + sAppend = "-none"; + + if(_isWin) + return "jbigi-windows"+sAppend; // The convention on Windows + if(_isLinux) + return "jbigi-linux"+sAppend; // The convention on linux... + if(_isFreebsd) + return "jbigi-freebsd"+sAppend; // The convention on freebsd... + if(_isMac) + return "jbigi-osx"+sAppend; + if(_isOS2) + return "jbigi-os2"+sAppend; + throw new RuntimeException("Dont know jbigi library name for os type '"+System.getProperty("os.name")+"'"); + } +} diff --git a/src/net/i2p/util/OrderedProperties.java b/src/net/i2p/util/OrderedProperties.java new file mode 100644 index 0000000..831ec9b --- /dev/null +++ b/src/net/i2p/util/OrderedProperties.java @@ -0,0 +1,351 @@ +package net.i2p.util; + +/* + * free (adj.): unencumbered; not under the control of others + * Written by jrandom in 2003 and released into the public domain + * with no warranty of any kind, either expressed or implied. + * It probably won't make your computer catch on fire, or eat + * your children, but it might. Use at your own risk. 
+ * + */ + +import java.io.InputStream; +import java.io.OutputStream; +import java.io.PrintStream; +import java.io.PrintWriter; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Enumeration; +import java.util.HashMap; +import java.util.Iterator; +import java.util.Map; +import java.util.Properties; +import java.util.Set; +import java.util.TreeSet; + +import net.i2p.data.DataHelper; + +/** + * Properties map that has its keySet ordered consistently (via the key's lexicographical ordering). + * This is useful in environments where maps must stay the same order (e.g. for signature verification) + * This does NOT support remove against the iterators / etc. + * + */ +public class OrderedProperties extends Properties { + private final static Log _log = new Log(OrderedProperties.class); + /** ordered set of keys (strings) stored in the properties */ + private TreeSet _order; + /** simple key=value mapping of the actual data */ + private Map _data; + + /** lock this before touching _order or _data */ + private Object _lock = new Object(); + + public OrderedProperties() { + super(); + _order = new TreeSet(); + _data = new HashMap(); + } + + public boolean contains(Object value) { + return containsValue(value); + } + + public boolean containsKey(Object key) { + synchronized (_lock) { + return _data.containsKey(key); + } + } + + public boolean containsValue(Object value) { + synchronized (_lock) { + return _data.containsValue(value); + } + } + + public boolean equals(Object obj) { + if ((obj != null) && (obj instanceof OrderedProperties)) { + synchronized (_lock) { + return _data.equals(obj); + } + } + + return false; + } + + public int hashCode() { + synchronized (_lock) { + return _data.hashCode(); + } + } + + public boolean isEmpty() { + return size() == 0; + } + + public String getProperty(String key) { + return getProperty((Object) key); + } + + public Object get(Object key) { + return getProperty(key); + } + + private String getProperty(Object key) { + if (key == null) return null; + synchronized (_lock) { + Object rv = _data.get(key); + if ((rv != null) && (rv instanceof String)) return (String) rv; + + return null; + } + } + + public Object setProperty(String key, String val) { + if ((key == null) || (val == null)) throw new IllegalArgumentException("Null values are not supported"); + synchronized (_lock) { + _order.add(key); + Object rv = _data.put(key, val); + return rv; + } + } + + public Object put(Object key, Object val) { + if ((key == null) || (val == null)) throw new NullPointerException("Null values or keys are not allowed"); + if (!(key instanceof String) || !(val instanceof String)) + throw new IllegalArgumentException("Key or value is not a string"); + return setProperty((String) key, (String) val); + } + + public void putAll(Map data) { + if (data == null) return; + for (Iterator iter = data.keySet().iterator(); iter.hasNext();) { + Object key = iter.next(); + Object val = data.get(key); + put(key, val); + } + } + + public Object clone() { + synchronized (_lock) { + OrderedProperties rv = new OrderedProperties(); + rv.putAll(this); + return rv; + } + } + + public void clear() { + synchronized (_lock) { + _order.clear(); + _data.clear(); + } + } + + public int size() { + synchronized (_lock) { + return _order.size(); + } + } + + public Object remove(Object key) { + synchronized (_lock) { + _order.remove(key); + Object rv = _data.remove(key); + return rv; + } + } + + public Set keySet() { + synchronized (_lock) { + return 
Collections.unmodifiableSortedSet((TreeSet) _order.clone()); + } + } + + public Set entrySet() { + synchronized (_lock) { + return Collections.unmodifiableSet(buildEntrySet((TreeSet) _order.clone())); + } + } + + public Collection values() { + synchronized (_lock) { + Collection values = new ArrayList(_data.size()); + for (Iterator iter = _data.values().iterator(); iter.hasNext();) { + values.add(iter.next()); + } + return values; + } + } + + public Enumeration elements() { + return Collections.enumeration(values()); + } + + public Enumeration keys() { + return Collections.enumeration(keySet()); + } + + public Enumeration propertyNames() { + return Collections.enumeration(keySet()); + } + + public void list(PrintStream out) { // nop + } + + public void list(PrintWriter out) { // nop + } + + public void load(InputStream in) { // nop + } + + //public void save(OutputStream out, String header) {} + public void store(OutputStream out, String header) { // nop + } + + private Set buildEntrySet(Set data) { + TreeSet ts = new TreeSet(); + for (Iterator iter = data.iterator(); iter.hasNext();) { + String key = (String) iter.next(); + String val = getProperty(key); + ts.add(new StringMapEntry(key, val)); + } + return ts; + } + + private static class StringMapEntry implements Map.Entry, Comparable { + private Object _key; + private Object _value; + + public StringMapEntry(String key, String val) { + _key = key; + _value = val; + } + + public Object getKey() { + return _key; + } + + public Object getValue() { + return _value; + } + + public Object setValue(Object value) { + Object old = _value; + _value = value; + return old; + } + + public int compareTo(Object o) { + if (o == null) return -1; + if (o instanceof StringMapEntry) return ((String) getKey()).compareTo((String)((StringMapEntry) o).getKey()); + if (o instanceof String) return ((String) getKey()).compareTo((String)o); + return -2; + } + + public boolean equals(Object o) { + if (o == null) return false; + if (!(o instanceof StringMapEntry)) return false; + StringMapEntry e = (StringMapEntry) o; + return DataHelper.eq(e.getKey(), getKey()) && DataHelper.eq(e.getValue(), getValue()); + } + } + + /// + /// tests + /// + + public static void main(String args[]) { + test(new OrderedProperties()); + _log.debug("After ordered"); + //test(new Properties()); + //System.out.println("After normal"); + test2(); + testThrash(); + } + + private static void test2() { + OrderedProperties p = new OrderedProperties(); + p.setProperty("a", "b"); + p.setProperty("c", "d"); + OrderedProperties p2 = new OrderedProperties(); + try { + p2.putAll(p); + } catch (Throwable t) { + t.printStackTrace(); + } + _log.debug("After test2"); + } + + private static void test(Properties p) { + for (int i = 0; i < 10; i++) + p.setProperty(i + "asdfasdfasdf", "qwerasdfqwer"); + for (Iterator iter = p.keySet().iterator(); iter.hasNext();) { + String key = (String) iter.next(); + String val = p.getProperty(key); + _log.debug("[" + key + "] = [" + val + "]"); + } + p.remove(4 + "asdfasdfasdf"); + _log.debug("After remove"); + for (Iterator iter = p.keySet().iterator(); iter.hasNext();) { + String key = (String) iter.next(); + String val = p.getProperty(key); + _log.debug("[" + key + "] = [" + val + "]"); + } + try { + p.put("nullVal", null); + _log.debug("Null put did NOT fail!"); + } catch (NullPointerException npe) { + _log.debug("Null put failed correctly"); + } + } + + /** + * Set 100 concurrent threads trying to do some operations against a single + * OrderedProperties object a 
thousand times. Hopefully this will help + * flesh out any synchronization issues. + * + */ + private static void testThrash() { + OrderedProperties prop = new OrderedProperties(); + for (int i = 0; i < 100; i++) + prop.setProperty(i + "", i + " value"); + _log.debug("Thrash properties built"); + for (int i = 0; i < 100; i++) + thrash(prop, i); + } + + private static void thrash(Properties props, int i) { + I2PThread t = new I2PThread(new Thrash(props)); + t.setName("Thrash" + i); + t.start(); + } + + private static class Thrash implements Runnable { + private Properties _props; + + public Thrash(Properties props) { + _props = props; + } + + public void run() { + int numRuns = 1000; + _log.debug("Begin thrashing " + numRuns + " times"); + for (int i = 0; i < numRuns; i++) { + Set keys = _props.keySet(); + //_log.debug("keySet fetched"); + int cur = 0; + for (Iterator iter = keys.iterator(); iter.hasNext();) { + Object o = iter.next(); + Object val = _props.get(o); + //_log.debug("Value " + cur + " fetched"); + cur++; + } + //_log.debug("Values fetched"); + int size = _props.size(); + _log.debug("Size calculated"); + } + _log.debug("Done thrashing " + numRuns + " times"); + } + } +} \ No newline at end of file diff --git a/src/net/i2p/util/PooledRandomSource.java b/src/net/i2p/util/PooledRandomSource.java new file mode 100644 index 0000000..5a6bb7b --- /dev/null +++ b/src/net/i2p/util/PooledRandomSource.java @@ -0,0 +1,204 @@ +package net.i2p.util; + +/* + * free (adj.): unencumbered; not under the control of others + * Written by jrandom in 2005 and released into the public domain + * with no warranty of any kind, either expressed or implied. + * It probably won't make your computer catch on fire, or eat + * your children, but it might. Use at your own risk. + * + */ + +import net.i2p.I2PAppContext; +import net.i2p.crypto.EntropyHarvester; +import net.i2p.data.Base64; + +/** + * Maintain a set of PRNGs to feed the apps + */ +public class PooledRandomSource extends RandomSource { + private Log _log; + protected RandomSource _pool[]; + protected volatile int _nextPool; + + public static final int POOL_SIZE = 16; + /** + * How much random data will we precalculate and feed from (as opposed to on demand + * reseeding, etc). If this is not set, a default will be used (4MB), or if it is + * set to 0, no buffer will be used, otherwise the amount specified will be allocated + * across the pooled PRNGs. 
+ * + */ + public static final String PROP_BUFFER_SIZE = "i2p.prng.totalBufferSizeKB"; + + public PooledRandomSource(I2PAppContext context) { + super(context); + _log = context.logManager().getLog(PooledRandomSource.class); + initializePool(context); + } + + protected void initializePool(I2PAppContext context) { + _pool = new RandomSource[POOL_SIZE]; + + String totalSizeProp = context.getProperty(PROP_BUFFER_SIZE); + int totalSize = -1; + if (totalSizeProp != null) { + try { + totalSize = Integer.parseInt(totalSizeProp); + } catch (NumberFormatException nfe) { + totalSize = -1; + } + } + + byte buf[] = new byte[1024]; + initSeed(buf); + for (int i = 0; i < POOL_SIZE; i++) { + if (totalSize < 0) + _pool[i] = new BufferedRandomSource(context); + else if (totalSize > 0) + _pool[i] = new BufferedRandomSource(context, (totalSize*1024) / POOL_SIZE); + else + _pool[i] = new RandomSource(context); + _pool[i].setSeed(buf); + if (i > 0) { + _pool[i-1].nextBytes(buf); + _pool[i].setSeed(buf); + } + } + _pool[0].nextBytes(buf); + System.out.println("seeded and initialized: " + Base64.encode(buf)); + _nextPool = 0; + } + + private final RandomSource pickPRNG() { + // how much more explicit can we get? + int cur = _nextPool; + cur = cur % POOL_SIZE; + RandomSource rv = _pool[cur]; + cur++; + cur = cur % POOL_SIZE; + _nextPool = cur; + return rv; + } + + /** + * According to the java docs (http://java.sun.com/j2se/1.4.1/docs/api/java/util/Random.html#nextInt(int)) + * nextInt(n) should return a number between 0 and n (including 0 and excluding n). However, their pseudocode, + * as well as sun's, kaffe's, and classpath's implementation INCLUDES NEGATIVE VALUES. + * WTF. Ok, so we're going to have it return between 0 and n (including 0, excluding n), since + * thats what it has been used for. + * + */ + public int nextInt(int n) { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextInt(n); + } + } + + /** + * Like the modified nextInt, nextLong(n) returns a random number from 0 through n, + * including 0, excluding n. 
+ */ + public long nextLong(long n) { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextLong(n); + } + } + + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public boolean nextBoolean() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextBoolean(); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public void nextBytes(byte buf[]) { + RandomSource prng = pickPRNG(); + synchronized (prng) { + prng.nextBytes(buf); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public double nextDouble() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextDouble(); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public float nextFloat() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextFloat(); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public double nextGaussian() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextGaussian(); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public int nextInt() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextInt(); + } + } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public long nextLong() { + RandomSource prng = pickPRNG(); + synchronized (prng) { + return prng.nextLong(); + } + } + + public EntropyHarvester harvester() { + RandomSource prng = pickPRNG(); + return prng.harvester(); + } + + public static void main(String args[]) { + //PooledRandomSource prng = new PooledRandomSource(I2PAppContext.getGlobalContext()); + long start = System.currentTimeMillis(); + RandomSource prng = I2PAppContext.getGlobalContext().random(); + long created = System.currentTimeMillis(); + System.out.println("prng type: " + prng.getClass().getName()); + int size = 8*1024*1024; + try { + java.io.FileOutputStream out = new java.io.FileOutputStream("random.file"); + for (int i = 0; i < size; i++) { + out.write(prng.nextInt()); + } + out.close(); + } catch (Exception e) { e.printStackTrace(); } + long done = System.currentTimeMillis(); + System.out.println("Written to random.file: create took " + (created-start) + ", generate took " + (done-created)); + prng.saveSeed(); + } +} diff --git a/src/net/i2p/util/RandomSource.java b/src/net/i2p/util/RandomSource.java new file mode 100644 index 0000000..51e340a --- /dev/null +++ b/src/net/i2p/util/RandomSource.java @@ -0,0 +1,211 @@ +package net.i2p.util; + +/* + * free (adj.): unencumbered; not under the control of others + * Written by jrandom in 2003 and released into the public domain + * with no warranty of any kind, either expressed or implied. + * It probably won't make your computer catch on fire, or eat + * your children, but it might. Use at your own risk. + * + */ + +import java.security.SecureRandom; + +import net.i2p.I2PAppContext; +import net.i2p.crypto.EntropyHarvester; +import net.i2p.data.Base64; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; + +/** + * Singleton for whatever PRNG i2p uses. 
+ * + * @author jrandom + */ +public class RandomSource extends SecureRandom implements EntropyHarvester { + private Log _log; + private EntropyHarvester _entropyHarvester; + protected I2PAppContext _context; + + public RandomSource(I2PAppContext context) { + super(); + _context = context; + _log = context.logManager().getLog(RandomSource.class); + // when we replace to have hooks for fortuna (etc), replace with + // a factory (or just a factory method) + _entropyHarvester = this; + } + public static RandomSource getInstance() { + return I2PAppContext.getGlobalContext().random(); + } + + /** + * According to the java docs (http://java.sun.com/j2se/1.4.1/docs/api/java/util/Random.html#nextInt(int)) + * nextInt(n) should return a number between 0 and n (including 0 and excluding n). However, their pseudocode, + * as well as sun's, kaffe's, and classpath's implementation INCLUDES NEGATIVE VALUES. + * WTF. Ok, so we're going to have it return between 0 and n (including 0, excluding n), since + * thats what it has been used for. + * + */ + public int nextInt(int n) { + if (n == 0) return 0; + int val = super.nextInt(n); + if (val < 0) val = 0 - val; + if (val >= n) val = val % n; + return val; + } + + /** + * Like the modified nextInt, nextLong(n) returns a random number from 0 through n, + * including 0, excluding n. + */ + public long nextLong(long n) { + long v = super.nextLong(); + if (v < 0) v = 0 - v; + if (v >= n) v = v % n; + return v; + } + + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public boolean nextBoolean() { return super.nextBoolean(); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public void nextBytes(byte buf[]) { super.nextBytes(buf); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public double nextDouble() { return super.nextDouble(); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public float nextFloat() { return super.nextFloat(); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public double nextGaussian() { return super.nextGaussian(); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public int nextInt() { return super.nextInt(); } + /** + * override as synchronized, for those JVMs that don't always pull via + * nextBytes (cough ibm) + */ + public long nextLong() { return super.nextLong(); } + + public EntropyHarvester harvester() { return _entropyHarvester; } + + public void feedEntropy(String source, long data, int bitoffset, int bits) { + if (bitoffset == 0) + setSeed(data); + } + + public void feedEntropy(String source, byte[] data, int offset, int len) { + if ( (offset == 0) && (len == data.length) ) { + setSeed(data); + } else { + setSeed(_context.sha().calculateHash(data, offset, len).getData()); + } + } + + public void loadSeed() { + byte buf[] = new byte[1024]; + if (initSeed(buf)) + setSeed(buf); + } + + public void saveSeed() { + byte buf[] = new byte[1024]; + nextBytes(buf); + writeSeed(buf); + } + + private static final String SEEDFILE = "prngseed.rnd"; + + public static final void writeSeed(byte buf[]) { + File f = new File(SEEDFILE); + FileOutputStream fos = null; + try { + fos = new FileOutputStream(f); + fos.write(buf); + } catch (IOException ioe) { + // ignore + } 
finally { + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + public final boolean initSeed(byte buf[]) { + // why urandom? because /dev/random blocks, and there are arguments + // suggesting such blockages are largely meaningless + boolean ok = seedFromFile("/dev/urandom", buf); + // we merge (XOR) in the data from /dev/urandom with our own seedfile + ok = seedFromFile("prngseed.rnd", buf) || ok; + return ok; + } + + private static final boolean seedFromFile(String filename, byte buf[]) { + File f = new File(filename); + if (f.exists()) { + FileInputStream fis = null; + try { + fis = new FileInputStream(f); + int read = 0; + byte tbuf[] = new byte[buf.length]; + while (read < buf.length) { + int curRead = fis.read(tbuf, read, tbuf.length - read); + if (curRead < 0) + break; + read += curRead; + } + for (int i = 0; i < read; i++) + buf[i] ^= tbuf[i]; + return true; + } catch (IOException ioe) { + // ignore + } finally { + if (fis != null) try { fis.close(); } catch (IOException ioe) {} + } + } + return false; + } + + public static void main(String args[]) { + for (int j = 0; j < 2; j++) { + RandomSource rs = new RandomSource(I2PAppContext.getGlobalContext()); + byte buf[] = new byte[1024]; + boolean seeded = rs.initSeed(buf); + System.out.println("PRNG class hierarchy: "); + Class c = rs.getClass(); + while (c != null) { + System.out.println("\t" + c.getName()); + c = c.getSuperclass(); + } + System.out.println("Provider: \n" + rs.getProvider()); + if (seeded) { + System.out.println("Initialized seed: " + Base64.encode(buf)); + rs.setSeed(buf); + } + for (int i = 0; i < 64; i++) rs.nextBytes(buf); + rs.saveSeed(); + } + } + + // noop + private static class DummyEntropyHarvester implements EntropyHarvester { + public void feedEntropy(String source, long data, int bitoffset, int bits) {} + public void feedEntropy(String source, byte[] data, int offset, int len) {} + } +} diff --git a/src/net/i2p/util/ResettableGZIPInputStream.java b/src/net/i2p/util/ResettableGZIPInputStream.java new file mode 100644 index 0000000..2896fa5 --- /dev/null +++ b/src/net/i2p/util/ResettableGZIPInputStream.java @@ -0,0 +1,281 @@ +package net.i2p.util; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; + +import java.util.zip.CRC32; +import java.util.zip.Inflater; +import java.util.zip.InflaterInputStream; +import java.util.zip.GZIPInputStream; + +/** + * GZIP implementation per + * RFC 1952, reusing + * java's standard CRC32 and Inflater and InflaterInputStream implementations. + * The main difference is that this implementation allows its state to be + * reset to initial values, and hence reused, while the standard + * GZIPInputStream reads the GZIP header from the stream on instantiation. + * + */ +public class ResettableGZIPInputStream extends InflaterInputStream { + private static final int FOOTER_SIZE = 8; // CRC32 + ISIZE + private static final boolean DEBUG = false; + /** keep a typesafe copy of (LookaheadInputStream)in */ + private LookaheadInputStream _lookaheadStream; + private CRC32 _crc32; + private byte _buf1[] = new byte[1]; + private boolean _complete; + + /** + * Build a new GZIP stream without a bound compressed stream. You need + * to initialize this with initialize(compressedStream) when you want to + * decompress a stream. 
+ */ + public ResettableGZIPInputStream() { + super(new LookaheadInputStream(FOOTER_SIZE), new Inflater(true)); + _lookaheadStream = (LookaheadInputStream)in; + _crc32 = new CRC32(); + _complete = false; + } + public ResettableGZIPInputStream(InputStream compressedStream) throws IOException { + this(); + initialize(compressedStream); + } + + /** + * Blocking call to initialize this stream with the data from the given + * compressed stream. + * + */ + public void initialize(InputStream compressedStream) throws IOException { + len = 0; + inf.reset(); + _complete = false; + _crc32.reset(); + _buf1[0] = 0x0; + // blocking call to read the footer/lookahead, and use the compressed + // stream as the source for further lookahead bytes + _lookaheadStream.initialize(compressedStream); + // now blocking call to read and verify the GZIP header from the + // lookahead stream + verifyHeader(); + } + + public int read() throws IOException { + if (_complete) { + // shortcircuit so the inflater doesn't try to refill + // with the footer's data (which would fail, causing ZLIB err) + return -1; + } + int read = read(_buf1, 0, 1); + if (read == -1) + return -1; + else + return _buf1[0]; + } + + public int read(byte buf[]) throws IOException { + return read(buf, 0, buf.length); + } + public int read(byte buf[], int off, int len) throws IOException { + if (_complete) { + // shortcircuit so the inflater doesn't try to refill + // with the footer's data (which would fail, causing ZLIB err) + return -1; + } + int read = super.read(buf, off, len); + if (read == -1) { + verifyFooter(); + return -1; + } else { + _crc32.update(buf, off, read); + if (_lookaheadStream.getEOFReached()) { + verifyFooter(); + inf.reset(); // so it doesn't bitch about missing data... + _complete = true; + } + return read; + } + } + + long getCurrentCRCVal() { return _crc32.getValue(); } + + void verifyFooter() throws IOException { + byte footer[] = _lookaheadStream.getFooter(); + + long expectedCRCVal = _crc32.getValue(); + + // damn RFC writing their bytes backwards... + if (!(footer[0] == (byte)(expectedCRCVal & 0xFF))) + throw new IOException("footer[0]=" + footer[0] + " expectedCRC[0]=" + + (expectedCRCVal & 0xFF)); + if (!(footer[1] == (byte)(expectedCRCVal >>> 8))) + throw new IOException("footer[1]=" + footer[1] + " expectedCRC[1]=" + + ((expectedCRCVal >>> 8) & 0xFF)); + if (!(footer[2] == (byte)(expectedCRCVal >>> 16))) + throw new IOException("footer[2]=" + footer[2] + " expectedCRC[2]=" + + ((expectedCRCVal >>> 16) & 0xFF)); + if (!(footer[3] == (byte)(expectedCRCVal >>> 24))) + throw new IOException("footer[3]=" + footer[3] + " expectedCRC[3]=" + + ((expectedCRCVal >>> 24) & 0xFF)); + + int expectedSizeVal = inf.getTotalOut(); + + if (!(footer[4] == (byte)expectedSizeVal)) + throw new IOException("footer[4]=" + footer[4] + " expectedSize[0]=" + + (expectedSizeVal & 0xFF)); + if (!(footer[5] == (byte)(expectedSizeVal >>> 8))) + throw new IOException("footer[5]=" + footer[5] + " expectedSize[1]=" + + ((expectedSizeVal >>> 8) & 0xFF)); + if (!(footer[6] == (byte)(expectedSizeVal >>> 16))) + throw new IOException("footer[6]=" + footer[6] + " expectedSize[2]=" + + ((expectedSizeVal >>> 16) & 0xFF)); + if (!(footer[7] == (byte)(expectedSizeVal >>> 24))) + throw new IOException("footer[7]=" + footer[7] + " expectedSize[3]=" + + ((expectedSizeVal >>> 24) & 0xFF)); + } + + /** + * Make sure the header is valid, throwing an IOException if its b0rked. 
+ */ + private void verifyHeader() throws IOException { + int c = in.read(); + if (c != 0x1F) throw new IOException("First magic byte was wrong [" + c + "]"); + c = in.read(); + if (c != 0x8B) throw new IOException("Second magic byte was wrong [" + c + "]"); + c = in.read(); + if (c != 0x08) throw new IOException("Compression format is invalid [" + c + "]"); + + int flags = in.read(); + + // snag (and ignore) the MTIME + c = in.read(); + if (c == -1) throw new IOException("EOF on MTIME0 [" + c + "]"); + c = in.read(); + if (c == -1) throw new IOException("EOF on MTIME1 [" + c + "]"); + c = in.read(); + if (c == -1) throw new IOException("EOF on MTIME2 [" + c + "]"); + c = in.read(); + if (c == -1) throw new IOException("EOF on MTIME3 [" + c + "]"); + + c = in.read(); + if ( (c != 0x00) && (c != 0x02) && (c != 0x04) ) + throw new IOException("Invalid extended flags [" + c + "]"); + + c = in.read(); // ignore creator OS + + // handle flags... + if (0 != (flags & (1<<5))) { + // extra header, read and ignore + int len = 0; + c = in.read(); + if (c == -1) throw new IOException("EOF reading the extra header"); + len = c; + c = in.read(); + if (c == -1) throw new IOException("EOF reading the extra header"); + len += (c << 8); + + // now skip that data + for (int i = 0; i < len; i++) { + c = in.read(); + if (c == -1) throw new IOException("EOF reading the extra header's body"); + } + } + + if (0 != (flags & (1 << 4))) { + // ignore the name + c = in.read(); + while (c != 0) { + if (c == -1) throw new IOException("EOF reading the name"); + c = in.read(); + } + } + + if (0 != (flags & (1 << 3))) { + // ignore the comment + c = in.read(); + while (c != 0) { + if (c == -1) throw new IOException("EOF reading the comment"); + c = in.read(); + } + } + + if (0 != (flags & (1 << 6))) { + // ignore the header CRC16 (we still check the body CRC32) + c = in.read(); + if (c == -1) throw new IOException("EOF reading the CRC16"); + c = in.read(); + if (c == -1) throw new IOException("EOF reading the CRC16"); + } + } + + public static void main(String args[]) { + for (int i = 129; i < 64*1024; i++) { + if (!test(i)) return; + } + + byte orig[] = "ho ho ho, merry christmas".getBytes(); + try { + java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream(64); + java.util.zip.GZIPOutputStream o = new java.util.zip.GZIPOutputStream(baos); + o.write(orig); + o.finish(); + o.flush(); + o.close(); + byte compressed[] = baos.toByteArray(); + + ResettableGZIPInputStream i = new ResettableGZIPInputStream(); + i.initialize(new ByteArrayInputStream(compressed)); + byte readBuf[] = new byte[128]; + int read = i.read(readBuf); + if (read != orig.length) + throw new RuntimeException("read=" + read); + for (int j = 0; j < read; j++) + if (readBuf[j] != orig[j]) + throw new RuntimeException("wtf, j=" + j + " readBuf=" + readBuf[j] + " orig=" + orig[j]); + boolean ok = (-1 == i.read()); + if (!ok) throw new RuntimeException("wtf, not EOF after the data?"); + System.out.println("Match ok"); + } catch (Exception e) { + e.printStackTrace(); + } + } + + private static boolean test(int size) { + byte b[] = new byte[size]; + new java.util.Random().nextBytes(b); + try { + java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream(size); + java.util.zip.GZIPOutputStream o = new java.util.zip.GZIPOutputStream(baos); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = baos.toByteArray(); + + ResettableGZIPInputStream in = new ResettableGZIPInputStream(new ByteArrayInputStream(compressed)); + 
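+ // this constructor variant calls initialize() itself, so the GZIP header has
+ // already been read and verified before the read loop below starts; the
+ // footer (CRC32 + ISIZE) is checked once the loop hits EOF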
java.io.ByteArrayOutputStream baos2 = new java.io.ByteArrayOutputStream(size); + byte rbuf[] = new byte[512]; + while (true) { + int read = in.read(rbuf); + if (read == -1) + break; + baos2.write(rbuf, 0, read); + } + byte rv[] = baos2.toByteArray(); + if (rv.length != b.length) + throw new RuntimeException("read length: " + rv.length + " expected: " + b.length); + + if (!net.i2p.data.DataHelper.eq(rv, 0, b, 0, b.length)) { + throw new RuntimeException("foo, read=" + rv.length); + } else { + System.out.println("match, w00t @ " + size); + return true; + } + } catch (Exception e) { + System.out.println("Error dealing with size=" + size + ": " + e.getMessage()); + e.printStackTrace(); + return false; + } + } +} diff --git a/src/net/i2p/util/ResettableGZIPOutputStream.java b/src/net/i2p/util/ResettableGZIPOutputStream.java new file mode 100644 index 0000000..3d5184f --- /dev/null +++ b/src/net/i2p/util/ResettableGZIPOutputStream.java @@ -0,0 +1,171 @@ +package net.i2p.util; + +import java.io.ByteArrayOutputStream; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.OutputStream; + +import java.util.zip.CRC32; +import java.util.zip.Deflater; +import java.util.zip.DeflaterOutputStream; +import java.util.zip.GZIPOutputStream; +import java.util.zip.GZIPInputStream; +import net.i2p.data.DataHelper; + +/** + * GZIP implementation per + * RFC 1952, reusing + * java's standard CRC32 and Deflater implementations. The main difference + * is that this implementation allows its state to be reset to initial + * values, and hence reused, while the standard GZIPOutputStream writes the + * GZIP header to the stream on instantiation, rather than on first write. + * + */ +public class ResettableGZIPOutputStream extends DeflaterOutputStream { + /** has the header been written out yet? */ + private boolean _headerWritten; + /** how much data is in the uncompressed stream? */ + private long _writtenSize; + private CRC32 _crc32; + private static final boolean DEBUG = false; + + public ResettableGZIPOutputStream(OutputStream o) { + super(o, new Deflater(9, true)); + _headerWritten = false; + _crc32 = new CRC32(); + } + /** + * Reinitialze everything so we can write a brand new gzip output stream + * again. + */ + public void reset() { + if (DEBUG) + System.out.println("Resetting (writtenSize=" + _writtenSize + ")"); + def.reset(); + _crc32.reset(); + _writtenSize = 0; + _headerWritten = false; + } + + private static final byte[] HEADER = new byte[] { + (byte)0x1F, (byte)0x8b, // magic bytes + 0x08, // compression format == DEFLATE + 0x00, // flags (NOT using CRC16, filename, etc) + 0x00, 0x00, 0x00, 0x00, // no modification time available (don't leak this!) + 0x02, // maximum compression + (byte)0xFF // unknown creator OS (!!!) + }; + + /** + * obviously not threadsafe, but its a stream, thats standard + */ + private void ensureHeaderIsWritten() throws IOException { + if (_headerWritten) return; + if (DEBUG) System.out.println("Writing header"); + out.write(HEADER); + _headerWritten = true; + } + + private void writeFooter() throws IOException { + // damn RFC writing their bytes backwards... 
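+ // i.e. both trailer fields are 32-bit little-endian values: the CRC32 of the
+ // uncompressed data first, then ISIZE (the uncompressed length mod 2^32),
+ // each written least significant byte first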
+ long crcVal = _crc32.getValue(); + out.write((int)(crcVal & 0xFF)); + out.write((int)((crcVal >>> 8) & 0xFF)); + out.write((int)((crcVal >>> 16) & 0xFF)); + out.write((int)((crcVal >>> 24) & 0xFF)); + + long sizeVal = _writtenSize; // % (1 << 31) // *redundant* + out.write((int)(sizeVal & 0xFF)); + out.write((int)((sizeVal >>> 8) & 0xFF)); + out.write((int)((sizeVal >>> 16) & 0xFF)); + out.write((int)((sizeVal >>> 24) & 0xFF)); + out.flush(); + if (DEBUG) { + System.out.println("Footer written: crcVal=" + crcVal + " sizeVal=" + sizeVal + " written=" + _writtenSize); + System.out.println("size hex: " + Long.toHexString(sizeVal)); + System.out.print( "size2 hex:" + Long.toHexString((int)(sizeVal & 0xFF))); + System.out.print( Long.toHexString((int)((sizeVal >>> 8) & 0xFF))); + System.out.print( Long.toHexString((int)((sizeVal >>> 16) & 0xFF))); + System.out.print( Long.toHexString((int)((sizeVal >>> 24) & 0xFF))); + System.out.println(); + } + } + + public void close() throws IOException { + finish(); + super.close(); + } + public void finish() throws IOException { + ensureHeaderIsWritten(); + super.finish(); + writeFooter(); + } + + public void write(int b) throws IOException { + ensureHeaderIsWritten(); + _crc32.update(b); + _writtenSize++; + super.write(b); + } + public void write(byte buf[]) throws IOException { + write(buf, 0, buf.length); + } + public void write(byte buf[], int off, int len) throws IOException { + ensureHeaderIsWritten(); + _crc32.update(buf, off, len); + _writtenSize += len; + super.write(buf, off, len); + } + + public static void main(String args[]) { + for (int i = 0; i < 2; i++) + test(); + } + private static void test() { + byte b[] = "hi, how are you today?".getBytes(); + try { + ByteArrayOutputStream baos = new ByteArrayOutputStream(64); + ResettableGZIPOutputStream o = new ResettableGZIPOutputStream(baos); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = baos.toByteArray(); + + ByteArrayOutputStream baos2 = new ByteArrayOutputStream(); + SnoopGZIPOutputStream gzo = new SnoopGZIPOutputStream(baos2); + gzo.write(b); + gzo.finish(); + gzo.flush(); + long value = gzo.getCRC().getValue(); + byte compressed2[] = baos2.toByteArray(); + System.out.println("CRC32 values: Resettable = " + o._crc32.getValue() + + " GZIP = " + value); + + System.out.print("Resettable compressed data: "); + for (int i = 0; i < compressed.length; i++) + System.out.print(Integer.toHexString(compressed[i] & 0xFF) + " "); + System.out.println(); + System.out.print(" GZIP compressed data: "); + for (int i = 0; i < compressed2.length; i++) + System.out.print(Integer.toHexString(compressed2[i] & 0xFF) + " "); + System.out.println(); + + GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed)); + byte rv[] = new byte[128]; + int read = in.read(rv); + if (!DataHelper.eq(rv, 0, b, 0, b.length)) + throw new RuntimeException("foo, read=" + read); + else + System.out.println("match, w00t"); + } catch (Exception e) { e.printStackTrace(); } + } + + /** just for testing/verification, expose the CRC32 values */ + private static final class SnoopGZIPOutputStream extends GZIPOutputStream { + public SnoopGZIPOutputStream(OutputStream o) throws IOException { + super(o); + } + public CRC32 getCRC() { return crc; } + } +} + diff --git a/src/net/i2p/util/ReusableGZIPInputStream.java b/src/net/i2p/util/ReusableGZIPInputStream.java new file mode 100644 index 0000000..832d242 --- /dev/null +++ b/src/net/i2p/util/ReusableGZIPInputStream.java @@ -0,0 +1,126 @@ +package 
net.i2p.util; + +import java.io.ByteArrayOutputStream; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.OutputStream; +import java.util.ArrayList; +import java.util.zip.GZIPOutputStream; +import java.util.zip.GZIPInputStream; +import net.i2p.data.DataHelper; + +/** + * Provide a cache of reusable GZIP streams, each handling up to 32KB without + * expansion. + * + */ +public class ReusableGZIPInputStream extends ResettableGZIPInputStream { + private static ArrayList _available = new ArrayList(8); + /** + * Pull a cached instance + */ + public static ReusableGZIPInputStream acquire() { + ReusableGZIPInputStream rv = null; + synchronized (_available) { + if (_available.size() > 0) + rv = (ReusableGZIPInputStream)_available.remove(0); + } + if (rv == null) { + rv = new ReusableGZIPInputStream(); + } + return rv; + } + /** + * Release an instance back into the cache (this will reset the + * state) + */ + public static void release(ReusableGZIPInputStream released) { + synchronized (_available) { + if (_available.size() < 8) + _available.add(released); + } + } + + private ReusableGZIPInputStream() { super(); } + + public static void main(String args[]) { + for (int i = 0; i < 2; i++) + test(); + for (int i = 0; i < 64*1024; i++) { + if (!test(i)) break; + } + } + private static void test() { + byte b[] = "hi, how are you today?".getBytes(); + try { + ByteArrayOutputStream baos = new ByteArrayOutputStream(64); + GZIPOutputStream o = new GZIPOutputStream(baos); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = baos.toByteArray(); + + ReusableGZIPInputStream in = ReusableGZIPInputStream.acquire(); + in.initialize(new ByteArrayInputStream(compressed)); + byte rv[] = new byte[128]; + int read = in.read(rv); + if (!DataHelper.eq(rv, 0, b, 0, b.length)) + throw new RuntimeException("foo, read=" + read); + else + System.out.println("match, w00t"); + ReusableGZIPInputStream.release(in); + } catch (Exception e) { e.printStackTrace(); } + } + + private static boolean test(int size) { + byte b[] = new byte[size]; + new java.util.Random().nextBytes(b); + try { + ByteArrayOutputStream baos = new ByteArrayOutputStream(size); + GZIPOutputStream o = new GZIPOutputStream(baos); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = baos.toByteArray(); + + ReusableGZIPInputStream in = ReusableGZIPInputStream.acquire(); + in.initialize(new ByteArrayInputStream(compressed)); + ByteArrayOutputStream baos2 = new ByteArrayOutputStream(size); + byte rbuf[] = new byte[128]; + try { + while (true) { + int read = in.read(rbuf); + if (read == -1) + break; + baos2.write(rbuf, 0, read); + } + } catch (IOException ioe) { + ioe.printStackTrace(); + long crcVal = in.getCurrentCRCVal(); + //try { in.verifyFooter(); } catch (IOException ioee) { + // ioee.printStackTrace(); + //} + throw ioe; + } catch (RuntimeException re) { + re.printStackTrace(); + throw re; + } + ReusableGZIPInputStream.release(in); + byte rv[] = baos2.toByteArray(); + if (rv.length != b.length) + throw new RuntimeException("read length: " + rv.length + " expected: " + b.length); + + if (!DataHelper.eq(rv, 0, b, 0, b.length)) { + throw new RuntimeException("foo, read=" + rv.length); + } else { + System.out.println("match, w00t"); + return true; + } + } catch (Exception e) { + System.out.println("Error dealing with size=" + size + ": " + e.getMessage()); + e.printStackTrace(); + return false; + } + } +} + diff --git a/src/net/i2p/util/ReusableGZIPOutputStream.java 
b/src/net/i2p/util/ReusableGZIPOutputStream.java new file mode 100644 index 0000000..ebdd1f3 --- /dev/null +++ b/src/net/i2p/util/ReusableGZIPOutputStream.java @@ -0,0 +1,124 @@ +package net.i2p.util; + +import java.io.ByteArrayOutputStream; +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.OutputStream; +import java.util.ArrayList; +import java.util.zip.GZIPOutputStream; +import java.util.zip.GZIPInputStream; +import net.i2p.data.DataHelper; + +/** + * Provide a cache of reusable GZIP streams, each handling up to 32KB without + * expansion. + * + */ +public class ReusableGZIPOutputStream extends ResettableGZIPOutputStream { + private static ArrayList _available = new ArrayList(16); + /** + * Pull a cached instance + */ + public static ReusableGZIPOutputStream acquire() { + ReusableGZIPOutputStream rv = null; + synchronized (_available) { + if (_available.size() > 0) + rv = (ReusableGZIPOutputStream)_available.remove(0); + } + if (rv == null) { + rv = new ReusableGZIPOutputStream(); + } + return rv; + } + + /** + * Release an instance back into the cache (this will discard any + * state) + */ + public static void release(ReusableGZIPOutputStream out) { + out.reset(); + synchronized (_available) { + if (_available.size() < 16) + _available.add(out); + } + } + + private ByteArrayOutputStream _buffer = null; + private ReusableGZIPOutputStream() { + super(new ByteArrayOutputStream(40*1024)); + _buffer = (ByteArrayOutputStream)out; + } + /** clear the data so we can start again afresh */ + public void reset() { + super.reset(); + _buffer.reset(); + } + /** pull the contents of the stream written */ + public byte[] getData() { return _buffer.toByteArray(); } + + public static void main(String args[]) { + try { + for (int i = 0; i < 2; i++) + test(); + for (int i = 500; i < 64*1024; i++) { + if (!test(i)) break; + } + } catch (Exception e) { e.printStackTrace(); } + try { Thread.sleep(10*1000); } catch (InterruptedException ie){} + System.out.println("After all tests are complete..."); + } + private static void test() { + byte b[] = "hi, how are you today?".getBytes(); + try { + ReusableGZIPOutputStream o = ReusableGZIPOutputStream.acquire(); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = o.getData(); + ReusableGZIPOutputStream.release(o); + + GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed)); + byte rv[] = new byte[128]; + int read = in.read(rv); + if (!DataHelper.eq(rv, 0, b, 0, b.length)) + throw new RuntimeException("foo, read=" + read); + else + System.out.println("match, w00t"); + } catch (Exception e) { e.printStackTrace(); } + } + + private static boolean test(int size) { + byte b[] = new byte[size]; + new java.util.Random().nextBytes(b); + try { + ReusableGZIPOutputStream o = ReusableGZIPOutputStream.acquire(); + o.write(b); + o.finish(); + o.flush(); + byte compressed[] = o.getData(); + ReusableGZIPOutputStream.release(o); + + GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed)); + ByteArrayOutputStream baos2 = new ByteArrayOutputStream(256*1024); + byte rbuf[] = new byte[128]; + while (true) { + int read = in.read(rbuf); + if (read == -1) + break; + baos2.write(rbuf, 0, read); + } + byte rv[] = baos2.toByteArray(); + if (!DataHelper.eq(rv, 0, b, 0, b.length)) { + throw new RuntimeException("foo, read=" + rv.length); + } else { + System.out.println("match, w00t @ " + size); + return true; + } + } catch (Exception e) { + System.out.println("Error on size=" + size + ": " + 
e.getMessage()); + e.printStackTrace(); + return false; + } + } +} + diff --git a/src/net/i2p/util/SimpleTimer.java b/src/net/i2p/util/SimpleTimer.java new file mode 100644 index 0000000..c86b137 --- /dev/null +++ b/src/net/i2p/util/SimpleTimer.java @@ -0,0 +1,264 @@ +package net.i2p.util; + +import java.util.ArrayList; +import java.util.Iterator; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; + +import net.i2p.I2PAppContext; + +/** + * Simple event scheduler - toss an event on the queue and it gets fired at the + * appropriate time. The method that is fired however should NOT block (otherwise + * they b0rk the timer). + * + */ +public class SimpleTimer { + private static final SimpleTimer _instance = new SimpleTimer(); + public static SimpleTimer getInstance() { return _instance; } + private I2PAppContext _context; + private Log _log; + /** event time (Long) to event (TimedEvent) mapping */ + private TreeMap _events; + /** event (TimedEvent) to event time (Long) mapping */ + private Map _eventTimes; + private List _readyEvents; + + protected SimpleTimer() { this("SimpleTimer"); } + protected SimpleTimer(String name) { + _context = I2PAppContext.getGlobalContext(); + _log = _context.logManager().getLog(SimpleTimer.class); + _events = new TreeMap(); + _eventTimes = new HashMap(256); + _readyEvents = new ArrayList(4); + I2PThread runner = new I2PThread(new SimpleTimerRunner()); + runner.setName(name); + runner.setDaemon(true); + runner.start(); + for (int i = 0; i < 3; i++) { + I2PThread executor = new I2PThread(new Executor(_context, _log, _readyEvents)); + executor.setName(name + "Executor " + i); + executor.setDaemon(true); + executor.start(); + } + } + + public void reschedule(TimedEvent event, long timeoutMs) { + addEvent(event, timeoutMs, false); + } + + /** + * Queue up the given event to be fired no sooner than timeoutMs from now. + * However, if this event is already scheduled, the event will be scheduled + * for the earlier of the two timeouts, which may be before this stated + * timeout. If this is not the desired behavior, call removeEvent first. 
+ * + */ + public void addEvent(TimedEvent event, long timeoutMs) { addEvent(event, timeoutMs, true); } + /** + * @param useEarliestTime if its already scheduled, use the earlier of the + * two timeouts, else use the later + */ + public void addEvent(TimedEvent event, long timeoutMs, boolean useEarliestTime) { + int totalEvents = 0; + long now = System.currentTimeMillis(); + long eventTime = now + timeoutMs; + Long time = new Long(eventTime); + synchronized (_events) { + // remove the old scheduled position, then reinsert it + Long oldTime = (Long)_eventTimes.get(event); + if (oldTime != null) { + if (useEarliestTime) { + if (oldTime.longValue() < eventTime) { + _events.notifyAll(); + return; // already scheduled for sooner than requested + } else { + _events.remove(oldTime); + } + } else { + if (oldTime.longValue() > eventTime) { + _events.notifyAll(); + return; // already scheduled for later than the given period + } else { + _events.remove(oldTime); + } + } + } + while (_events.containsKey(time)) + time = new Long(time.longValue() + 1); + _events.put(time, event); + _eventTimes.put(event, time); + + if ( (_events.size() != _eventTimes.size()) ) { + _log.error("Skewed events: " + _events.size() + " for " + _eventTimes.size()); + for (Iterator iter = _eventTimes.keySet().iterator(); iter.hasNext(); ) { + TimedEvent evt = (TimedEvent)iter.next(); + Long when = (Long)_eventTimes.get(evt); + TimedEvent cur = (TimedEvent)_events.get(when); + if (cur != evt) { + _log.error("event " + evt + " @ " + when + ": " + cur); + } + } + } + + totalEvents = _events.size(); + _events.notifyAll(); + } + if (time.longValue() > eventTime + 100) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Lots of timer congestion, had to push " + event + " back " + + (time.longValue()-eventTime) + "ms (# events: " + totalEvents + ")"); + } + long timeToAdd = System.currentTimeMillis() - now; + if (timeToAdd > 50) { + if (_log.shouldLog(Log.WARN)) + _log.warn("timer contention: took " + timeToAdd + "ms to add a job with " + totalEvents + " queued"); + } + + } + + public boolean removeEvent(TimedEvent evt) { + if (evt == null) return false; + synchronized (_events) { + Long when = (Long)_eventTimes.remove(evt); + if (when != null) + _events.remove(when); + return null != when; + } + } + + /** + * Simple interface for events to be queued up and notified on expiration + */ + public interface TimedEvent { + /** + * the time requested has been reached (this call should NOT block, + * otherwise the whole SimpleTimer gets backed up) + * + */ + public void timeReached(); + } + + private long _occurredTime; + private long _occurredEventCount; + private TimedEvent _recentEvents[] = new TimedEvent[5]; + + private class SimpleTimerRunner implements Runnable { + public void run() { + List eventsToFire = new ArrayList(1); + while (true) { + try { + synchronized (_events) { + //if (_events.size() <= 0) + // _events.wait(); + //if (_events.size() > 100) + // _log.warn("> 100 events! 
" + _events.values()); + long now = System.currentTimeMillis(); + long nextEventDelay = -1; + Object nextEvent = null; + while (true) { + if (_events.size() <= 0) break; + Long when = (Long)_events.firstKey(); + if (when.longValue() <= now) { + TimedEvent evt = (TimedEvent)_events.remove(when); + if (evt != null) { + _eventTimes.remove(evt); + eventsToFire.add(evt); + } + } else { + nextEventDelay = when.longValue() - now; + nextEvent = _events.get(when); + break; + } + } + if (eventsToFire.size() <= 0) { + if (nextEventDelay != -1) { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("Next event in " + nextEventDelay + ": " + nextEvent); + _events.wait(nextEventDelay); + } else { + _events.wait(); + } + } + } + } catch (InterruptedException ie) { + // ignore + } catch (Throwable t) { + if (_log != null) { + _log.log(Log.CRIT, "Uncaught exception in the SimpleTimer!", t); + } else { + System.err.println("Uncaught exception in SimpleTimer"); + t.printStackTrace(); + } + } + + long now = System.currentTimeMillis(); + now = now - (now % 1000); + + synchronized (_readyEvents) { + for (int i = 0; i < eventsToFire.size(); i++) + _readyEvents.add(eventsToFire.get(i)); + _readyEvents.notifyAll(); + } + + if (_occurredTime == now) { + _occurredEventCount += eventsToFire.size(); + } else { + _occurredTime = now; + if (_occurredEventCount > 1000) { + StringBuffer buf = new StringBuffer(128); + buf.append("Too many simpleTimerJobs (").append(_occurredEventCount); + buf.append(") in a second!"); + _log.log(Log.CRIT, buf.toString()); + } + _occurredEventCount = 0; + } + + eventsToFire.clear(); + } + } + } +} + +class Executor implements Runnable { + private I2PAppContext _context; + private Log _log; + private List _readyEvents; + public Executor(I2PAppContext ctx, Log log, List events) { + _context = ctx; + _readyEvents = events; + } + public void run() { + while (true) { + SimpleTimer.TimedEvent evt = null; + synchronized (_readyEvents) { + if (_readyEvents.size() <= 0) + try { _readyEvents.wait(); } catch (InterruptedException ie) {} + if (_readyEvents.size() > 0) + evt = (SimpleTimer.TimedEvent)_readyEvents.remove(0); + } + + if (evt != null) { + long before = _context.clock().now(); + try { + evt.timeReached(); + } catch (Throwable t) { + log("wtf, event borked: " + evt, t); + } + long time = _context.clock().now() - before; + if ( (time > 1000) && (_log != null) && (_log.shouldLog(Log.WARN)) ) + _log.warn("wtf, event execution took " + time + ": " + evt); + } + } + } + + private void log(String msg, Throwable t) { + synchronized (this) { + if (_log == null) + _log = I2PAppContext.getGlobalContext().logManager().getLog(SimpleTimer.class); + } + _log.log(Log.CRIT, msg, t); + } +} diff --git a/src/org/bouncycastle/bc_license.txt b/src/org/bouncycastle/bc_license.txt new file mode 100644 index 0000000..1eb884a --- /dev/null +++ b/src/org/bouncycastle/bc_license.txt @@ -0,0 +1,26 @@ +/* + * Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle + * (http://www.bouncycastle.org) + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following + * conditions: + * + * The above copyright notice and this permission notice shall 
be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ diff --git a/src/org/bouncycastle/crypto/Digest.java b/src/org/bouncycastle/crypto/Digest.java new file mode 100644 index 0000000..7ffa239 --- /dev/null +++ b/src/org/bouncycastle/crypto/Digest.java @@ -0,0 +1,77 @@ +package org.bouncycastle.crypto; +/* + * Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle + * (http://www.bouncycastle.org) + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following + * conditions: + * + * The above copyright notice and this permission notice shall be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ + +/** + * interface that a message digest conforms to. + */ +public interface Digest +{ + /** + * return the algorithm name + * + * @return the algorithm name + */ + public String getAlgorithmName(); + + /** + * return the size, in bytes, of the digest produced by this message digest. + * + * @return the size, in bytes, of the digest produced by this message digest. + */ + public int getDigestSize(); + + /** + * update the message digest with a single byte. + * + * @param in the input byte to be entered. + */ + public void update(byte in); + + /** + * update the message digest with a block of bytes. + * + * @param in the byte array containing the data. + * @param inOff the offset into the byte array where the data starts. + * @param len the length of the data. + */ + public void update(byte[] in, int inOff, int len); + + /** + * close the digest, producing the final digest value. The doFinal + * call leaves the digest reset. + * + * @param out the array the digest is to be copied into. + * @param outOff the offset into the out array the digest is to start at. + */ + public int doFinal(byte[] out, int outOff); + + /** + * reset the digest back to it's initial state. 
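+ * A typical sequence against this interface, shown with the MD5Digest
+ * implementation bundled in this tree (the data/hash names are illustrative):
+ * <pre>
+ *   Digest d = new MD5Digest();
+ *   d.update(data, 0, data.length);
+ *   byte hash[] = new byte[d.getDigestSize()];
+ *   d.doFinal(hash, 0); // also leaves the digest reset, per doFinal above
+ * </pre>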
+ */ + public void reset(); +} diff --git a/src/org/bouncycastle/crypto/Mac.java b/src/org/bouncycastle/crypto/Mac.java new file mode 100644 index 0000000..336f883 --- /dev/null +++ b/src/org/bouncycastle/crypto/Mac.java @@ -0,0 +1,97 @@ +package org.bouncycastle.crypto; +/* + * Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle + * (http://www.bouncycastle.org) + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following + * conditions: + * + * The above copyright notice and this permission notice shall be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ + + +/** + * The base interface for implementations of message authentication codes (MACs). + * + * modified by jrandom to use the session key byte array directly + */ +public interface Mac +{ + /** + * Initialise the MAC. + * + * @param key the key required by the MAC. + * @exception IllegalArgumentException if the params argument is + * inappropriate. + */ + public void init(byte key[]) + throws IllegalArgumentException; + + /** + * Return the name of the algorithm the MAC implements. + * + * @return the name of the algorithm the MAC implements. + */ + public String getAlgorithmName(); + + /** + * Return the block size for this cipher (in bytes). + * + * @return the block size for this cipher in bytes. + */ + public int getMacSize(); + + /** + * add a single byte to the mac for processing. + * + * @param in the byte to be processed. + * @exception IllegalStateException if the MAC is not initialised. + */ + public void update(byte in) + throws IllegalStateException; + + /** + * @param in the array containing the input. + * @param inOff the index in the array the data begins at. + * @param len the length of the input starting at inOff. + * @exception IllegalStateException if the MAC is not initialised. + */ + public void update(byte[] in, int inOff, int len) + throws IllegalStateException; + + /** + * Compute the final statge of the MAC writing the output to the out + * parameter. + *+ * doFinal leaves the MAC in the same state it was after the last init. + * + * @param out the array the MAC is to be output to. + * @param outOff the offset into the out buffer the output is to start at. + * @exception IllegalStateException if the MAC is not initialised. + */ + public int doFinal(byte[] out, int outOff) + throws IllegalStateException; + + /** + * Reset the MAC. At the end of resetting the MAC should be in the + * in the same state it was after the last init (if there was one). 
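+ * For example, using the HMac and MD5Digest classes bundled in this tree
+ * (the key/data names are illustrative only):
+ * <pre>
+ *   Mac mac = new HMac(new MD5Digest());
+ *   mac.init(key);                    // raw key bytes, per init() above
+ *   mac.update(data, 0, data.length);
+ *   byte out[] = new byte[mac.getMacSize()];
+ *   mac.doFinal(out, 0);
+ * </pre>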
+ */ + public void reset(); +} diff --git a/src/org/bouncycastle/crypto/digests/GeneralDigest.java b/src/org/bouncycastle/crypto/digests/GeneralDigest.java new file mode 100644 index 0000000..09b72f9 --- /dev/null +++ b/src/org/bouncycastle/crypto/digests/GeneralDigest.java @@ -0,0 +1,154 @@ +package org.bouncycastle.crypto.digests; +/* + * Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle + * (http://www.bouncycastle.org) + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following + * conditions: + * + * The above copyright notice and this permission notice shall be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ + +import org.bouncycastle.crypto.Digest; + +/** + * base implementation of MD4 family style digest as outlined in + * "Handbook of Applied Cryptography", pages 344 - 347. + */ +public abstract class GeneralDigest + implements Digest +{ + private byte[] xBuf; + private int xBufOff; + + private long byteCount; + + /** + * Standard constructor + */ + protected GeneralDigest() + { + xBuf = new byte[4]; + xBufOff = 0; + } + + /** + * Copy constructor. We are using copy constructors in place + * of the Object.clone() interface as this interface is not + * supported by J2ME. + */ + protected GeneralDigest(GeneralDigest t) + { + xBuf = new byte[t.xBuf.length]; + System.arraycopy(t.xBuf, 0, xBuf, 0, t.xBuf.length); + + xBufOff = t.xBufOff; + byteCount = t.byteCount; + } + + public void update( + byte in) + { + xBuf[xBufOff++] = in; + + if (xBufOff == xBuf.length) + { + processWord(xBuf, 0); + xBufOff = 0; + } + + byteCount++; + } + + public void update( + byte[] in, + int inOff, + int len) + { + // + // fill the current word + // + while ((xBufOff != 0) && (len > 0)) + { + update(in[inOff]); + + inOff++; + len--; + } + + // + // process whole words. + // + while (len > xBuf.length) + { + processWord(in, inOff); + + inOff += xBuf.length; + len -= xBuf.length; + byteCount += xBuf.length; + } + + // + // load in the remainder. + // + while (len > 0) + { + update(in[inOff]); + + inOff++; + len--; + } + } + + public void finish() + { + long bitLength = (byteCount << 3); + + // + // add the pad bytes. 
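+ // (MD4-family padding: a single 0x80 byte, zero bytes until the current
+ // 4-byte word is flushed, then processLength() records the 64-bit message
+ // length in bits before the final processBlock())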
+ // + update((byte)128); + + while (xBufOff != 0) + { + update((byte)0); + } + + processLength(bitLength); + + processBlock(); + } + + public void reset() + { + byteCount = 0; + + xBufOff = 0; + for ( int i = 0; i < xBuf.length; i++ ) { + xBuf[i] = 0; + } + } + + protected abstract void processWord(byte[] in, int inOff); + + protected abstract void processLength(long bitLength); + + protected abstract void processBlock(); +} diff --git a/src/org/bouncycastle/crypto/digests/MD5Digest.java b/src/org/bouncycastle/crypto/digests/MD5Digest.java new file mode 100644 index 0000000..cd434b9 --- /dev/null +++ b/src/org/bouncycastle/crypto/digests/MD5Digest.java @@ -0,0 +1,302 @@ +package org.bouncycastle.crypto.digests; + + +/** + * implementation of MD5 as outlined in "Handbook of Applied Cryptography", pages 346 - 347. + */ +public class MD5Digest + extends GeneralDigest +{ + private static final int DIGEST_LENGTH = 16; + + private int H1, H2, H3, H4; // IV's + + private int[] X = new int[16]; + private int xOff; + + /** + * Standard constructor + */ + public MD5Digest() + { + reset(); + } + + /** + * Copy constructor. This will copy the state of the provided + * message digest. + */ + public MD5Digest(MD5Digest t) + { + super(t); + + H1 = t.H1; + H2 = t.H2; + H3 = t.H3; + H4 = t.H4; + + System.arraycopy(t.X, 0, X, 0, t.X.length); + xOff = t.xOff; + } + + public String getAlgorithmName() + { + return "MD5"; + } + + public int getDigestSize() + { + return DIGEST_LENGTH; + } + + protected void processWord( + byte[] in, + int inOff) + { + X[xOff++] = (in[inOff] & 0xff) | ((in[inOff + 1] & 0xff) << 8) + | ((in[inOff + 2] & 0xff) << 16) | ((in[inOff + 3] & 0xff) << 24); + + if (xOff == 16) + { + processBlock(); + } + } + + protected void processLength( + long bitLength) + { + if (xOff > 14) + { + processBlock(); + } + + X[14] = (int)(bitLength & 0xffffffff); + X[15] = (int)(bitLength >>> 32); + } + + private void unpackWord( + int word, + byte[] out, + int outOff) + { + out[outOff] = (byte)word; + out[outOff + 1] = (byte)(word >>> 8); + out[outOff + 2] = (byte)(word >>> 16); + out[outOff + 3] = (byte)(word >>> 24); + } + + public int doFinal( + byte[] out, + int outOff) + { + finish(); + + unpackWord(H1, out, outOff); + unpackWord(H2, out, outOff + 4); + unpackWord(H3, out, outOff + 8); + unpackWord(H4, out, outOff + 12); + + reset(); + + return DIGEST_LENGTH; + } + + /** + * reset the chaining variables to the IV values. + */ + public void reset() + { + super.reset(); + + H1 = 0x67452301; + H2 = 0xefcdab89; + H3 = 0x98badcfe; + H4 = 0x10325476; + + xOff = 0; + + for (int i = 0; i != X.length; i++) + { + X[i] = 0; + } + } + + // + // round 1 left rotates + // + private static final int S11 = 7; + private static final int S12 = 12; + private static final int S13 = 17; + private static final int S14 = 22; + + // + // round 2 left rotates + // + private static final int S21 = 5; + private static final int S22 = 9; + private static final int S23 = 14; + private static final int S24 = 20; + + // + // round 3 left rotates + // + private static final int S31 = 4; + private static final int S32 = 11; + private static final int S33 = 16; + private static final int S34 = 23; + + // + // round 4 left rotates + // + private static final int S41 = 6; + private static final int S42 = 10; + private static final int S43 = 15; + private static final int S44 = 21; + + /* + * rotate int x left n bits. 
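+ * (a 32-bit circular rotate, e.g. rotateLeft(0x80000001, 1) == 0x00000003;
+ * equivalent to Integer.rotateLeft on Java 5+)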
+ */ + private int rotateLeft( + int x, + int n) + { + return (x << n) | (x >>> (32 - n)); + } + + /* + * F, G, H and I are the basic MD5 functions. + */ + private int F( + int u, + int v, + int w) + { + return (u & v) | (~u & w); + } + + private int G( + int u, + int v, + int w) + { + return (u & w) | (v & ~w); + } + + private int H( + int u, + int v, + int w) + { + return u ^ v ^ w; + } + + private int K( + int u, + int v, + int w) + { + return v ^ (u | ~w); + } + + protected void processBlock() + { + int a = H1; + int b = H2; + int c = H3; + int d = H4; + + // + // Round 1 - F cycle, 16 times. + // + a = rotateLeft((a + F(b, c, d) + X[ 0] + 0xd76aa478), S11) + b; + d = rotateLeft((d + F(a, b, c) + X[ 1] + 0xe8c7b756), S12) + a; + c = rotateLeft((c + F(d, a, b) + X[ 2] + 0x242070db), S13) + d; + b = rotateLeft((b + F(c, d, a) + X[ 3] + 0xc1bdceee), S14) + c; + a = rotateLeft((a + F(b, c, d) + X[ 4] + 0xf57c0faf), S11) + b; + d = rotateLeft((d + F(a, b, c) + X[ 5] + 0x4787c62a), S12) + a; + c = rotateLeft((c + F(d, a, b) + X[ 6] + 0xa8304613), S13) + d; + b = rotateLeft((b + F(c, d, a) + X[ 7] + 0xfd469501), S14) + c; + a = rotateLeft((a + F(b, c, d) + X[ 8] + 0x698098d8), S11) + b; + d = rotateLeft((d + F(a, b, c) + X[ 9] + 0x8b44f7af), S12) + a; + c = rotateLeft((c + F(d, a, b) + X[10] + 0xffff5bb1), S13) + d; + b = rotateLeft((b + F(c, d, a) + X[11] + 0x895cd7be), S14) + c; + a = rotateLeft((a + F(b, c, d) + X[12] + 0x6b901122), S11) + b; + d = rotateLeft((d + F(a, b, c) + X[13] + 0xfd987193), S12) + a; + c = rotateLeft((c + F(d, a, b) + X[14] + 0xa679438e), S13) + d; + b = rotateLeft((b + F(c, d, a) + X[15] + 0x49b40821), S14) + c; + + // + // Round 2 - G cycle, 16 times. + // + a = rotateLeft((a + G(b, c, d) + X[ 1] + 0xf61e2562), S21) + b; + d = rotateLeft((d + G(a, b, c) + X[ 6] + 0xc040b340), S22) + a; + c = rotateLeft((c + G(d, a, b) + X[11] + 0x265e5a51), S23) + d; + b = rotateLeft((b + G(c, d, a) + X[ 0] + 0xe9b6c7aa), S24) + c; + a = rotateLeft((a + G(b, c, d) + X[ 5] + 0xd62f105d), S21) + b; + d = rotateLeft((d + G(a, b, c) + X[10] + 0x02441453), S22) + a; + c = rotateLeft((c + G(d, a, b) + X[15] + 0xd8a1e681), S23) + d; + b = rotateLeft((b + G(c, d, a) + X[ 4] + 0xe7d3fbc8), S24) + c; + a = rotateLeft((a + G(b, c, d) + X[ 9] + 0x21e1cde6), S21) + b; + d = rotateLeft((d + G(a, b, c) + X[14] + 0xc33707d6), S22) + a; + c = rotateLeft((c + G(d, a, b) + X[ 3] + 0xf4d50d87), S23) + d; + b = rotateLeft((b + G(c, d, a) + X[ 8] + 0x455a14ed), S24) + c; + a = rotateLeft((a + G(b, c, d) + X[13] + 0xa9e3e905), S21) + b; + d = rotateLeft((d + G(a, b, c) + X[ 2] + 0xfcefa3f8), S22) + a; + c = rotateLeft((c + G(d, a, b) + X[ 7] + 0x676f02d9), S23) + d; + b = rotateLeft((b + G(c, d, a) + X[12] + 0x8d2a4c8a), S24) + c; + + // + // Round 3 - H cycle, 16 times. 
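+ // (H(u, v, w) = u ^ v ^ w as defined above; the hex addends in each step are
+ // the standard MD5 T[i] = floor(2^32 * abs(sin(i))) constants from RFC 1321)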
+ // + a = rotateLeft((a + H(b, c, d) + X[ 5] + 0xfffa3942), S31) + b; + d = rotateLeft((d + H(a, b, c) + X[ 8] + 0x8771f681), S32) + a; + c = rotateLeft((c + H(d, a, b) + X[11] + 0x6d9d6122), S33) + d; + b = rotateLeft((b + H(c, d, a) + X[14] + 0xfde5380c), S34) + c; + a = rotateLeft((a + H(b, c, d) + X[ 1] + 0xa4beea44), S31) + b; + d = rotateLeft((d + H(a, b, c) + X[ 4] + 0x4bdecfa9), S32) + a; + c = rotateLeft((c + H(d, a, b) + X[ 7] + 0xf6bb4b60), S33) + d; + b = rotateLeft((b + H(c, d, a) + X[10] + 0xbebfbc70), S34) + c; + a = rotateLeft((a + H(b, c, d) + X[13] + 0x289b7ec6), S31) + b; + d = rotateLeft((d + H(a, b, c) + X[ 0] + 0xeaa127fa), S32) + a; + c = rotateLeft((c + H(d, a, b) + X[ 3] + 0xd4ef3085), S33) + d; + b = rotateLeft((b + H(c, d, a) + X[ 6] + 0x04881d05), S34) + c; + a = rotateLeft((a + H(b, c, d) + X[ 9] + 0xd9d4d039), S31) + b; + d = rotateLeft((d + H(a, b, c) + X[12] + 0xe6db99e5), S32) + a; + c = rotateLeft((c + H(d, a, b) + X[15] + 0x1fa27cf8), S33) + d; + b = rotateLeft((b + H(c, d, a) + X[ 2] + 0xc4ac5665), S34) + c; + + // + // Round 4 - K cycle, 16 times. + // + a = rotateLeft((a + K(b, c, d) + X[ 0] + 0xf4292244), S41) + b; + d = rotateLeft((d + K(a, b, c) + X[ 7] + 0x432aff97), S42) + a; + c = rotateLeft((c + K(d, a, b) + X[14] + 0xab9423a7), S43) + d; + b = rotateLeft((b + K(c, d, a) + X[ 5] + 0xfc93a039), S44) + c; + a = rotateLeft((a + K(b, c, d) + X[12] + 0x655b59c3), S41) + b; + d = rotateLeft((d + K(a, b, c) + X[ 3] + 0x8f0ccc92), S42) + a; + c = rotateLeft((c + K(d, a, b) + X[10] + 0xffeff47d), S43) + d; + b = rotateLeft((b + K(c, d, a) + X[ 1] + 0x85845dd1), S44) + c; + a = rotateLeft((a + K(b, c, d) + X[ 8] + 0x6fa87e4f), S41) + b; + d = rotateLeft((d + K(a, b, c) + X[15] + 0xfe2ce6e0), S42) + a; + c = rotateLeft((c + K(d, a, b) + X[ 6] + 0xa3014314), S43) + d; + b = rotateLeft((b + K(c, d, a) + X[13] + 0x4e0811a1), S44) + c; + a = rotateLeft((a + K(b, c, d) + X[ 4] + 0xf7537e82), S41) + b; + d = rotateLeft((d + K(a, b, c) + X[11] + 0xbd3af235), S42) + a; + c = rotateLeft((c + K(d, a, b) + X[ 2] + 0x2ad7d2bb), S43) + d; + b = rotateLeft((b + K(c, d, a) + X[ 9] + 0xeb86d391), S44) + c; + + H1 += a; + H2 += b; + H3 += c; + H4 += d; + + // + // reset the offset and clean out the word buffer. + // + xOff = 0; + for (int i = 0; i != X.length; i++) + { + X[i] = 0; + } + } +} diff --git a/src/org/bouncycastle/crypto/macs/HMac.java b/src/org/bouncycastle/crypto/macs/HMac.java new file mode 100644 index 0000000..e43e80c --- /dev/null +++ b/src/org/bouncycastle/crypto/macs/HMac.java @@ -0,0 +1,203 @@ +package org.bouncycastle.crypto.macs; +/* + * Copyright (c) 2000 - 2004 The Legion Of The Bouncy Castle + * (http://www.bouncycastle.org) + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the Software + * without restriction, including without limitation the rights to + * use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following + * conditions: + * + * The above copyright notice and this permission notice shall be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES + * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ + +//import org.bouncycastle.crypto.CipherParameters; +import org.bouncycastle.crypto.Digest; +import org.bouncycastle.crypto.Mac; +//import org.bouncycastle.crypto.params.KeyParameter; +import java.util.Arrays; +import java.util.ArrayList; + +/** + * HMAC implementation based on RFC2104 + * + * H(K XOR opad, H(K XOR ipad, text)) + * + * modified by jrandom to use the session key byte array directly and to cache + * a frequently used buffer (called on doFinal). changes released into the public + * domain in 2005. + * + */ +public class HMac +implements Mac +{ + private final static int BLOCK_LENGTH = 64; + + private final static byte IPAD = (byte)0x36; + private final static byte OPAD = (byte)0x5C; + + private Digest digest; + private int digestSize; + private byte[] inputPad = new byte[BLOCK_LENGTH]; + private byte[] outputPad = new byte[BLOCK_LENGTH]; + + public HMac( + Digest digest) + { + this(digest, digest.getDigestSize()); + } + public HMac( + Digest digest, int sz) + { + this.digest = digest; + this.digestSize = sz; + } + + public String getAlgorithmName() + { + return digest.getAlgorithmName() + "/HMAC"; + } + + public Digest getUnderlyingDigest() + { + return digest; + } + + //public void init( + // CipherParameters params) + //{ + public void init(byte key[]) + { + digest.reset(); + + //byte[] key = ((KeyParameter)params).getKey(); + + if (key.length > BLOCK_LENGTH) + { + digest.update(key, 0, key.length); + digest.doFinal(inputPad, 0); + for (int i = digestSize; i < inputPad.length; i++) + { + inputPad[i] = 0; + } + } + else + { + System.arraycopy(key, 0, inputPad, 0, key.length); + for (int i = key.length; i < inputPad.length; i++) + { + inputPad[i] = 0; + } + } + + // why reallocate? it hasn't changed sizes, and the arraycopy + // below fills it completely... + //outputPad = new byte[inputPad.length]; + System.arraycopy(inputPad, 0, outputPad, 0, inputPad.length); + + for (int i = 0; i < inputPad.length; i++) + { + inputPad[i] ^= IPAD; + } + + for (int i = 0; i < outputPad.length; i++) + { + outputPad[i] ^= OPAD; + } + + digest.update(inputPad, 0, inputPad.length); + } + + public int getMacSize() + { + return digestSize; + } + + public void update( + byte in) + { + digest.update(in); + } + + public void update( + byte[] in, + int inOff, + int len) + { + digest.update(in, inOff, len); + } + + public int doFinal( + byte[] out, + int outOff) + { + byte[] tmp = acquireTmp(digestSize); + //byte[] tmp = new byte[digestSize]; + digest.doFinal(tmp, 0); + + digest.update(outputPad, 0, outputPad.length); + digest.update(tmp, 0, tmp.length); + releaseTmp(tmp); + + int len = digest.doFinal(out, outOff); + + reset(); + + return len; + } + + /** + * list of buffers - index 0 is the cache for 32 byte arrays, while index 1 is the cache for 16 byte arrays + */ + private static ArrayList _tmpBuf[] = new ArrayList[] { new ArrayList(), new ArrayList() }; + private static byte[] acquireTmp(int sz) { + byte rv[] = null; + synchronized (_tmpBuf[sz == 32 ? 0 : 1]) { + if (_tmpBuf[sz == 32 ? 0 : 1].size() > 0) + rv = (byte[])_tmpBuf[sz == 32 ? 
0 : 1].remove(0); + } + if (rv != null) + Arrays.fill(rv, (byte)0x0); + else + rv = new byte[sz]; + return rv; + } + private static void releaseTmp(byte buf[]) { + if (buf == null) return; + synchronized (_tmpBuf[buf.length == 32 ? 0 : 1]) { + if (_tmpBuf[buf.length == 32 ? 0 : 1].size() < 100) + _tmpBuf[buf.length == 32 ? 0 : 1].add((Object)buf); + } + } + + /** + * Reset the mac generator. + */ + public void reset() + { + /* + * reset the underlying digest. + */ + digest.reset(); + + /* + * reinitialize the digest. + */ + digest.update(inputPad, 0, inputPad.length); + } +} diff --git a/src/org/hsqldb/GCJKludge.java b/src/org/hsqldb/GCJKludge.java new file mode 100644 index 0000000..4a3ae95 --- /dev/null +++ b/src/org/hsqldb/GCJKludge.java @@ -0,0 +1,10 @@ +package org.hsqldb; + +public class GCJKludge { + public static final Class _kludge[] = { + org.hsqldb.DatabaseInformationFull.class + , org.hsqldb.DatabaseInformationMain.class + //, org.hsqldb.HsqlSocketFactorySecure.class // removed for gcj 3.4 support + , org.hsqldb.Library.class + }; +} diff --git a/src/org/hsqldb/persist/GCJKludge.java b/src/org/hsqldb/persist/GCJKludge.java new file mode 100644 index 0000000..22c29c4 --- /dev/null +++ b/src/org/hsqldb/persist/GCJKludge.java @@ -0,0 +1,10 @@ +package org.hsqldb.persist; + +public class GCJKludge { + public static final Class _kludge[] = { +// org.hsqldb.persist.NIOScaledRAFile.class +// , + //org.hsqldb.persist.NIOLockFile.class + java.nio.MappedByteBuffer.class + }; +} diff --git a/src/syndie/Constants.java b/src/syndie/Constants.java new file mode 100644 index 0000000..39c445b --- /dev/null +++ b/src/syndie/Constants.java @@ -0,0 +1,135 @@ +package syndie; + +import java.util.*; + +/** + * ugly centralized place to put shared constants. who needs ooad? + */ +public class Constants { + /** header line in the enclosure before the body specifying the body size */ + public static final String MSG_HEADER_SIZE = "Size"; + + /** first line of the enclosure must start with this prefix for it to be supported */ + public static final String TYPE_PREFIX = "Syndie.Message.1."; + /** the type line we use when we can choose */ + public static final String TYPE_CURRENT = TYPE_PREFIX + "0"; + + /** what type of message is it? 
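+ (see the MSG_TYPE_POST, MSG_TYPE_META, and MSG_TYPE_REPLY values defined just below)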
*/ + public static final String MSG_HEADER_TYPE = "Syndie.MessageType"; + + /** msg_header_type value for normal content-bearing posts */ + public static final String MSG_TYPE_POST = "post"; + /** msg_header_type value for posts updating channel metadata */ + public static final String MSG_TYPE_META = "meta"; + /** msg_header_type value for posts encrypted to the channel reply key */ + public static final String MSG_TYPE_REPLY = "reply"; + + public static final String MSG_META_HEADER_IDENTITY = "Identity"; + public static final String MSG_META_HEADER_MANAGER_KEYS = "ManagerKeys"; + public static final String MSG_META_HEADER_POST_KEYS = "AuthorizedKeys"; + public static final String MSG_META_HEADER_EDITION = "Edition"; + public static final String MSG_META_HEADER_ENCRYPTKEY = "EncryptKey"; + public static final String MSG_META_HEADER_NAME = "Name"; + public static final String MSG_META_HEADER_DESCRIPTION = "Description"; + public static final String MSG_META_HEADER_PUBLICPOSTING = "PublicPosting"; + public static final String MSG_META_HEADER_PUBLICREPLY = "PublicReplies"; + public static final String MSG_META_HEADER_TAGS = "Tags"; + public static final String MSG_META_HEADER_ARCHIVES = "Archives"; + public static final String MSG_META_HEADER_READKEYS = "ChannelReadKeys"; + + public static final String MSG_HEADER_BODYKEY = "BodyKey"; + /** + * if specified, the answer to the given question is fed into the password-based-encryption + * algorithm to derive the body's read key + */ + public static final String MSG_HEADER_PBE_PROMPT = "BodyKeyPrompt"; + public static final String MSG_HEADER_PBE_PROMPT_SALT = "BodyKeyPromptSalt"; + + /** URI the message is posted under */ + public static final String MSG_HEADER_POST_URI = "PostURI"; + /** + * in case the channel in the postURI is not the channel that the post should + * be displayed in (eg an unauthorized post, or a reply) + */ + public static final String MSG_HEADER_TARGET_CHANNEL = "TargetChannel"; + /** tab delimited list of URIs the message is in reply to, most recent first */ + public static final String MSG_HEADER_REFERENCES = "References"; + /** URI the post is supposed to replace */ + public static final String MSG_HEADER_OVERWRITE = "Overwrite"; + /** If true, act as if this is the beginning of a new discussion thread */ + public static final String MSG_HEADER_FORCE_NEW_THREAD = "ForceNewThread"; + /** If true, only allow the poster to reply to the message */ + public static final String MSG_HEADER_REFUSE_REPLIES = "RefuseReplies"; + /** list of posts to be cancelled (if authorized) */ + public static final String MSG_HEADER_CANCEL = "Cancel"; + /** post subject */ + public static final String MSG_HEADER_SUBJECT = "Subject"; + /** suggested post expiration */ + public static final String MSG_HEADER_EXPIRATION = "Expiration"; + /** for multiauthor channels, we specify what nym we are authenticating ourselves with in the headers */ + public static final String MSG_HEADER_AUTHOR = "Author"; + /** + * if we are hiding what nym posted the message inside the headers, xor the + * actual authentication signature with this random AuthenticationMask to prevent + * confirmation attacks + */ + public static final String MSG_HEADER_AUTHENTICATION_MASK = "AuthenticationMask"; + + /** key can be used to read posts to a channel or its encrypted metadata */ + public static final String KEY_FUNCTION_READ = "read"; + /** key can be used to post metadata messages, etc */ + public static final String KEY_FUNCTION_MANAGE = "manage"; + /** key can be used to 
decrypt replies to a channel */ + public static final String KEY_FUNCTION_REPLY = "reply"; + /** key can be used to authorize normal posts without the poster necessarily authenticating themselves */ + public static final String KEY_FUNCTION_POST = "post"; + public static final String KEY_TYPE_AES256 = "AES256"; + public static final String KEY_TYPE_DSA = "DSA"; + public static final String KEY_TYPE_ELGAMAL2048 = "ELGAMAL2048"; + + public static final Boolean DEFAULT_ALLOW_PUBLIC_POSTS = Boolean.FALSE; + public static final Boolean DEFAULT_ALLOW_PUBLIC_REPLIES = Boolean.FALSE; + + public static final String MSG_PAGE_CONTENT_TYPE = "Content-type"; + public static final String MSG_ATTACH_CONTENT_TYPE = "Content-type"; + public static final String MSG_ATTACH_NAME = "Name"; + public static final String MSG_ATTACH_DESCRIPTION = "Description"; + public static final String MSG_HEADER_TAGS = "Tags"; + + public static final int MAX_AVATAR_SIZE = 16*1024; + + public static final String FILENAME_SUFFIX = ".syndie"; + + + public static final String[] split(char elem, String orig) { + List vals = new ArrayList(); + int off = 0; + int start = 0; + char str[] = orig.toCharArray(); + while (off < str.length) { + if (str[off] == elem) { + if (off-start > 0) { + vals.add(new String(str, start, off-start)); + } else { + vals.add(new String("")); + } + start = off+1; + } + off++; + } + if (off-start > 0) + vals.add(new String(str, start, off-start)); + else + vals.add(new String("")); + String rv[] = new String[vals.size()]; + for (int i = 0; i < rv.length; i++) + rv[i] = (String)vals.get(i); + return rv; + } + + public static void main(String args[]) { + String split[] = split('\n', "hi\nhow are you?\n\nw3wt\n\nthe above is a blank line"); + for (int i = 0; i < split.length; i++) + System.out.println(split[i]); + } +} diff --git a/src/syndie/Intl.java b/src/syndie/Intl.java new file mode 100644 index 0000000..5a8ecff --- /dev/null +++ b/src/syndie/Intl.java @@ -0,0 +1,61 @@ +package syndie; + +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.util.*; + +/** + * Internationalization helper + */ +public class Intl { + private static Map _loaded = new HashMap(4); + private static Intl _default = new Intl("EN", "GB"); + public static Intl getDefault() { return _default; } + public static Intl get(String lang, String region) { + Intl rv = (Intl)_loaded.get(lang + "_" + region); + if (rv == null) + rv = _default; + return rv; + } + + private String _lang; + private String _region; + private Properties _props; + private Intl(String lang, String region) { + _region = region; + _lang = lang; + _props = new Properties(); + load(); + } + private void load() { + try { + String name = "intl-" + _lang + "_" + _region + ".properties"; + File f = new File("resources", name); + InputStream in = null; + if (f.exists()) { + System.out.println("Loading " + f.getAbsolutePath()); + in = new FileInputStream(f); + } else { + System.out.println("Could not load " + f.getAbsolutePath()); + in = getClass().getResourceAsStream(name); + } + if (in != null) + _props.load(in); + } catch (IOException ioe) { + ioe.printStackTrace(); + } + } + + public String getString(String key) { + String rv = _props.getProperty(key); + if ( (rv == null) && (this != _default) ) + rv = _default.getString(key); + if (rv == null) { + System.out.println("internationalized key not found [" + key + "]"); + rv = key; //rv = ""; + } + return rv; + } +} diff --git 
a/src/syndie/data/ArchiveInfo.java b/src/syndie/data/ArchiveInfo.java new file mode 100644 index 0000000..e12998f --- /dev/null +++ b/src/syndie/data/ArchiveInfo.java @@ -0,0 +1,31 @@ +package syndie.data; + +/** + * + */ +public class ArchiveInfo { + private long _archiveId; + private boolean _postAllowed; + private boolean _readAllowed; + private SyndieURI _uri; + + public ArchiveInfo() { + _archiveId = -1; + _postAllowed = false; + _readAllowed = false; + _uri = null; + } + + public long getArchiveId() { return _archiveId; } + public void setArchiveId(long id) { _archiveId = id; } + public boolean getPostAllowed() { return _postAllowed; } + public void setPostAllowed(boolean ok) { _postAllowed = ok; } + public boolean getReadAllowed() { return _readAllowed; } + public void setReadAllowed(boolean ok) { _readAllowed = ok; } + public SyndieURI getURI() { return _uri; } + public void setURI(SyndieURI uri) { _uri = uri; } + + public boolean equals(Object o) { return ((ArchiveInfo)o)._archiveId == _archiveId; } + public int hashCode() { return (int)_archiveId; } + public String toString() { return "Archive " + _archiveId + ": " + _uri; } +} diff --git a/src/syndie/data/ChannelInfo.java b/src/syndie/data/ChannelInfo.java new file mode 100644 index 0000000..ee5cf9a --- /dev/null +++ b/src/syndie/data/ChannelInfo.java @@ -0,0 +1,232 @@ +package syndie.data; + +import java.util.*; +import net.i2p.data.*; + +/** + * + * + */ +public class ChannelInfo { + private long _channelId; + private Hash _channelHash; + private SigningPublicKey _identKey; + private PublicKey _encryptKey; + private long _edition; + private String _name; + private String _description; + private boolean _allowPublicPosts; + private boolean _allowPublicReplies; + private long _expiration; + /** set of Strings that anyone can know about the channel */ + private Set _publicTags; + /** set of Strings only authorized people can see */ + private Set _privateTags; + /** set of SigningPublicKeys that are allowed to sign posts to the channel */ + private Set _authorizedPosters; + /** set of SigningPublicKeys that are allowed to sign metadata posts for the channel */ + private Set _authorizedManagers; + /** set of ArchiveInfo instances that anyone can see to get more posts */ + private Set _publicArchives; + /** set of ArchiveInfo instances that only authorized people can see to get more posts */ + private Set _privateArchives; + /** set of SessionKey instances that posts can be encrypted with */ + private Set _readKeys; + /** publicly visible headers delivered with the metadata */ + private Properties _publicHeaders; + /** privately visible headers delivered with the metadata */ + private Properties _privateHeaders; + /** list of ReferenceNode instances that the channel refers to */ + private List _references; + private boolean _readKeyUnknown; + private String _passphrasePrompt; + + public ChannelInfo() { + _channelId = -1; + _channelHash = null; + _identKey = null; + _encryptKey = null; + _edition = -1; + _name = null; + _description = null; + _allowPublicPosts = false; + _allowPublicReplies = false; + _readKeyUnknown = false; + _passphrasePrompt = null; + _expiration = -1; + _publicTags = Collections.EMPTY_SET; + _privateTags = Collections.EMPTY_SET; + _authorizedPosters = Collections.EMPTY_SET; + _authorizedManagers = Collections.EMPTY_SET; + _publicArchives = Collections.EMPTY_SET; + _privateArchives = Collections.EMPTY_SET; + _readKeys = Collections.EMPTY_SET; + _publicHeaders = new Properties(); + _privateHeaders = new 
Properties(); + _references = Collections.EMPTY_LIST; + } + + public long getChannelId() { return _channelId; } + public void setChannelId(long id) { _channelId = id; } + public Hash getChannelHash() { return _channelHash; } + public void setChannelHash(Hash hash) { _channelHash = hash; } + public SigningPublicKey getIdentKey() { return _identKey; } + public void setIdentKey(SigningPublicKey key) { _identKey = key; } + public PublicKey getEncryptKey() { return _encryptKey; } + public void setEncryptKey(PublicKey key) { _encryptKey = key; } + public long getEdition() { return _edition; } + public void setEdition(long edition) { _edition = edition; } + public String getName() { return _name; } + public void setName(String name) { _name = name; } + public String getDescription() { return _description; } + public void setDescription(String desc) { _description = desc; } + public boolean getAllowPublicPosts() { return _allowPublicPosts; } + public void setAllowPublicPosts(boolean val) { _allowPublicPosts = val; } + public boolean getAllowPublicReplies() { return _allowPublicReplies; } + public void setAllowPublicReplies(boolean val) { _allowPublicReplies = val; } + public long getExpiration() { return _expiration; } + public void setExpiration(long when) { _expiration = when; } + /** set of Strings that anyone can know about the channel */ + public Set getPublicTags() { return _publicTags; } + public void setPublicTags(Set tags) { _publicTags = tags; } + /** set of Strings only authorized people can see */ + public Set getPrivateTags() { return _privateTags; } + public void setPrivateTags(Set tags) { _privateTags = tags; } + /** set of SigningPublicKeys that are allowed to sign posts to the channel */ + public Set getAuthorizedPosters() { return _authorizedPosters; } + public void setAuthorizedPosters(Set who) { _authorizedPosters = who; } + /** set of SigningPublicKeys that are allowed to sign metadata posts for the channel */ + public Set getAuthorizedManagers() { return _authorizedManagers; } + public void setAuthorizedManagers(Set who) { _authorizedManagers = who; } + /** set of ArchiveInfo instances that anyone can see to get more posts */ + public Set getPublicArchives() { return _publicArchives; } + public void setPublicArchives(Set where) { _publicArchives = where; } + /** set of ArchiveInfo instances that only authorized people can see to get more posts */ + public Set getPrivateArchives() { return _privateArchives; } + public void setPrivateArchives(Set where) { _privateArchives = where; } + /** set of SessionKey instances that posts can be encrypted with */ + public Set getReadKeys() { return _readKeys; } + public void setReadKeys(Set keys) { _readKeys = keys; } + /** publicly visible headers delivered with the metadata */ + public Properties getPublicHeaders() { return _publicHeaders; } + public void setPublicHeaders(Properties headers) { _publicHeaders = headers; } + /** privately visible headers delivered with the metadata */ + public Properties getPrivateHeaders() { return _privateHeaders; } + public void setPrivateHeaders(Properties props) { _privateHeaders = props; } + /** list of ReferenceNode instances that the channel refers to */ + public List getReferences() { return _references; } + public void setReferences(List refs) { _references = refs; } + public boolean getReadKeyUnknown() { return _readKeyUnknown; } + public void setReadKeyUnknown(boolean unknown) { _readKeyUnknown = unknown; } + public String getPassphrasePrompt() { return _passphrasePrompt; } + public void 
setPassphrasePrompt(String prompt) { _passphrasePrompt = prompt; } + + public boolean equals(Object obj) { return ((ChannelInfo)obj)._channelId == _channelId; } + public int hashCode() { return (int)_channelId; } + public String toString() { + StringBuffer buf = new StringBuffer(); + if (_channelHash == null) + buf.append("Channel not yet defined (edition " + _edition + ")\n"); + else + buf.append("Channel " + _channelHash.toBase64() + " (" + _channelId + " edition " + _edition + ")\n"); + if (_encryptKey == null) + buf.append("Replies should be encrypted to a key not yet determined\n"); + else + buf.append("Replies should be encrypted to " + _encryptKey.calculateHash().toBase64() + " / " + _encryptKey.toBase64() + "\n"); + if (_name == null) + buf.append("Suggested name: not yet determined\n"); + else + buf.append("Suggested name: " + _name + "\n"); + if (_description == null) + buf.append("Suggested description: not yet determined\n"); + else + buf.append("Suggested description: " + _description + "\n"); + if (_expiration <= 0) + buf.append("Channel expiration: never\n"); + else + buf.append("Channel expiration: " + new Date(_expiration) + "\n"); + buf.append("Allow anyone to post new threads? " + _allowPublicPosts + "\n"); + buf.append("Allow anyone to post replies to existing threads? " + _allowPublicReplies + "\n"); + buf.append("Publicly known tags: " + _publicTags + "\n"); + buf.append("Hidden tags: " + _privateTags + "\n"); + + buf.append("Allow posts by: "); + for (Iterator iter = _authorizedPosters.iterator(); iter.hasNext(); ) { + SigningPublicKey key = (SigningPublicKey)iter.next(); + buf.append(key.calculateHash().toBase64()).append(", "); + } + // managers can post too + for (Iterator iter = _authorizedManagers.iterator(); iter.hasNext(); ) { + SigningPublicKey key = (SigningPublicKey)iter.next(); + buf.append(key.calculateHash().toBase64()).append(", "); + } + if (_channelHash != null) + buf.append(_channelHash.toBase64()); + else + buf.append("the channel identity"); + buf.append("\n"); + + buf.append("Allow management by: "); + for (Iterator iter = _authorizedManagers.iterator(); iter.hasNext(); ) { + SigningPublicKey key = (SigningPublicKey)iter.next(); + buf.append(key.calculateHash().toBase64()).append(", "); + } + if (_channelHash != null) + buf.append(_channelHash.toBase64()); + else + buf.append("the channel identity"); + buf.append("\n"); + if ( (_publicArchives != null) && (_publicArchives.size() > 0) ) { + buf.append("Publicly known channel archives: \n"); + for (Iterator iter = _publicArchives.iterator(); iter.hasNext(); ) { + ArchiveInfo archive = (ArchiveInfo)iter.next(); + buf.append('\t').append(archive).append('\n'); + } + } + if ( (_privateArchives != null) && (_privateArchives.size() > 0) ) { + buf.append("Hidden channel archives: \n"); + for (Iterator iter = _privateArchives.iterator(); iter.hasNext(); ) { + ArchiveInfo archive = (ArchiveInfo)iter.next(); + buf.append('\t').append(archive).append('\n'); + } + } + if (_readKeys != null) + buf.append("Known channel read keys: " + _readKeys.size() + "\n"); + + Set headers = new TreeSet(); + if (_publicHeaders != null) + headers.addAll(_publicHeaders.keySet()); + if (_privateHeaders != null) + headers.addAll(_privateHeaders.keySet()); + if (headers.size() > 0) { + buf.append("Metadata headers:\n"); + for (Iterator iter = headers.iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + boolean isPublic = false; + String val = null; + if (_privateHeaders != null) + val = 
_privateHeaders.getProperty(name); + if (val != null) { + isPublic = false; + } else { + isPublic = true; + val = _publicHeaders.getProperty(name); + } + buf.append("\t"); + if (isPublic) + buf.append("+"); + else + buf.append("-"); + buf.append(name).append(":\t").append(val).append("\n"); + } + buf.append("(hidden headers prepended with -, public headers prepended with +)\n"); + } + if (_references.size() > 0) { + String refs = ReferenceNode.walk(_references); + buf.append("References: \n"); + buf.append(refs); + buf.append("\n"); + } + return buf.toString(); + } +} diff --git a/src/syndie/data/Enclosure.java b/src/syndie/data/Enclosure.java new file mode 100644 index 0000000..f3ff333 --- /dev/null +++ b/src/syndie/data/Enclosure.java @@ -0,0 +1,439 @@ +package syndie.data; + +import java.io.*; +import java.net.URISyntaxException; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.*; +import net.i2p.data.*; +import gnu.crypto.hash.Sha256Standalone; +import syndie.Constants; + +/** + * Handle the parsing of a raw message + * + */ +public class Enclosure { + /** full enclosure formatting version */ + private String _enclosureType; + /** headers visible to all */ + private Properties _publicHeaders; + /** cached unparsed public headers */ + private byte _publicHeaderData[]; + /** encrypted/padded/zipped/etc data */ + private byte[] _data; + /** hash from the beginning of the enclosure through the data */ + private Hash _authorizationHash; + /** + * signature of the enclosure up through the data by an authorized key + * (or just random junk if unauthorized) + */ + private Signature _authorizationSig; + /** hash from the beginning of the enclosure through the authorization signature */ + private Hash _authenticationHash; + /** + * signature of the enclosure up through the authorization signature + * by the nym. 
the nym may not be known prior to unencrypting the data + */ + private Signature _authenticationSig; + /** original signature data as stored in the enclosure, while the authenticationSig itself + * may be adjusted as controlled by a private header value */ + private byte _authenticationSigOrig[]; + + public Enclosure(InputStream raw) throws IOException { + _enclosureType = null; + _publicHeaders = new Properties(); + _publicHeaderData = null; + _data = null; + _authorizationHash = null; + _authorizationSig = null; + _authenticationHash = null; + _authenticationSig = null; + _authenticationSigOrig = null; + load(raw); + } + + public boolean getLoaded() { return _authorizationSig != null; } + public String getEnclosureType() { return _enclosureType; } + public boolean isReply() { return msgType(Constants.MSG_TYPE_REPLY); } + public boolean isPost() { return msgType(Constants.MSG_TYPE_POST); } + public boolean isMeta() { return msgType(Constants.MSG_TYPE_META); } + private boolean msgType(String queryType) { + String type = getHeaderString(Constants.MSG_HEADER_TYPE); + if (type != null) + return type.equals(queryType); + else + return false; + } + public Properties getHeaders() { return _publicHeaders; } + public String getHeaderString(String key) { return _publicHeaders.getProperty(key); } + public byte[] getHeaderBytes(String key) { + return toBytes(_publicHeaders.getProperty(key)); + } + public static byte[] toBytes(String val) { + if (val == null) + return null; + else + return Base64.decode(val); + } + public SyndieURI getHeaderURI(String key) { + return toURI(_publicHeaders.getProperty(key)); + } + public static SyndieURI toURI(String val) { + if (val == null) { + return null; + } else { + try { + return new SyndieURI(val); + } catch (URISyntaxException ex) { + return null; + } + } + } + public SyndieURI[] getHeaderURIs(String key) { + return toURIs(_publicHeaders.getProperty(key)); + } + public static SyndieURI[] toURIs(String val) { + if (val == null) { + return null; + } else { + String str[] = Constants.split('\t', val); // val.split("\t"); + if (str != null) { + SyndieURI uris[] = new SyndieURI[str.length]; + int invalid = 0; + for (int i = 0; i < str.length; i++) { + try { + uris[i] = new SyndieURI(str[i]); + } catch (URISyntaxException ex) { + invalid++; + uris[i] = null; + } + } + if (invalid > 0) { + SyndieURI rv[] = new SyndieURI[str.length - invalid]; + int cur = 0; + for (int i = 0; i < str.length; i++) { + if (uris[i] != null) { + rv[cur] = uris[i]; + cur++; + } + } + return rv; + } else { + return uris; + } + } else { + return null; + } + } + } + + public String[] getHeaderStrings(String key) { + return toStrings(_publicHeaders.getProperty(key)); + } + public static String[] toStrings(String val) { + if (val == null) + return null; + else + return Constants.split('\t', val); //val.split("\t"); + } + public Boolean getHeaderBoolean(String key) { + return toBoolean(_publicHeaders.getProperty(key)); + } + public static Boolean toBoolean(String val) { + if (val == null) + return null; + else + return Boolean.valueOf(val); + } + public Long getHeaderLong(String key) { + return toLong(_publicHeaders.getProperty(key)); + } + public static Long toLong(String val) { + if (val == null) { + return null; + } else { + try { + return Long.valueOf(val); + } catch (NumberFormatException nfe) { + return null; + } + } + } + public Date getHeaderDate(String key) { + return toDate(_publicHeaders.getProperty(key)); + } + private static final SimpleDateFormat _dateFormat = new 
SimpleDateFormat("yyyyMMdd"); + public static Date toDate(String val) { + if (val == null) { + return null; + } else { + try { + synchronized (_dateFormat) { + return _dateFormat.parse(val); + } + } catch (ParseException pe) { + return null; + } + } + } + public SessionKey getHeaderSessionKey(String key) { + return toSessionKey(_publicHeaders.getProperty(key)); + } + public static SessionKey toSessionKey(String val) { + if (val == null) { + return null; + } else { + byte b[] = Base64.decode(val); + if ( (b != null) && (b.length == SessionKey.KEYSIZE_BYTES) ) + return new SessionKey(b); + else + return null; + } + } + public SessionKey[] getHeaderSessionKeys(String key) { + return toSessionKeys(_publicHeaders.getProperty(key)); + } + public static SessionKey[] toSessionKeys(String val) { + if (val == null) { + return null; + } else { + String str[] = Constants.split('\t', val); //val.split("\t"); + if (str != null) { + SessionKey keys[] = new SessionKey[str.length]; + int invalid = 0; + for (int i = 0; i < keys.length; i++) { + byte key[] = Base64.decode(str[i]); + if ( (key != null) && (key.length == SessionKey.KEYSIZE_BYTES) ) + keys[i] = new SessionKey(key); + else + invalid++; + } + if (invalid > 0) { + SessionKey rv[] = new SessionKey[str.length - invalid]; + int cur = 0; + for (int i = 0; i < str.length; i++) { + if (keys[i] != null) { + rv[cur] = keys[i]; + cur++; + } + } + return rv; + } else { + return keys; + } + } else { + return null; + } + } + } + public SigningPublicKey getHeaderSigningKey(String key) { + return toSigningKey(_publicHeaders.getProperty(key)); + } + public static SigningPublicKey toSigningKey(String str) { + if (str == null) { + return null; + } else { + byte val[] = Base64.decode(str); + if ( (val != null) && (val.length == SigningPublicKey.KEYSIZE_BYTES) ) + return new SigningPublicKey(val); + else + return null; + } + } + public SigningPublicKey[] getHeaderSigningKeys(String key) { + return toSigningKeys(toStrings(_publicHeaders.getProperty(key))); + } + public static SigningPublicKey[] toSigningKeys(String vals[]) { + if (vals == null) { + return null; + } else { + SigningPublicKey keys[] = new SigningPublicKey[vals.length]; + int invalid = 0; + for (int i = 0; i < vals.length; i++) { + byte val[] = Base64.decode(vals[i]); + if ( (val != null) && (val.length == SigningPublicKey.KEYSIZE_BYTES) ) + keys[i] = new SigningPublicKey(val); + else + invalid++; + } + if (invalid > 0) { + SigningPublicKey rv[] = new SigningPublicKey[vals.length - invalid]; + int cur = 0; + for (int i = 0; i < vals.length; i++) { + if (keys[i] != null) { + rv[cur] = keys[i]; + cur++; + } + } + return rv; + } else { + return keys; + } + } + } + public PublicKey getHeaderEncryptKey(String key) { + return toEncryptKey(_publicHeaders.getProperty(key)); + } + public static PublicKey toEncryptKey(String str) { + if (str == null) { + return null; + } else { + byte val[] = Base64.decode(str); + if ( (val != null) && (val.length == PublicKey.KEYSIZE_BYTES) ) + return new PublicKey(val); + else + return null; + } + } + + public int getDataSize() { return (_data != null ? 
_data.length : 0); } + public InputStream getData() { return new ByteArrayInputStream(_data); } + public void discardData() { _data = null; } + + public Hash getAuthorizationHash() { return _authorizationHash; } + public Signature getAuthorizationSig() { return _authorizationSig; } + public Hash getAuthenticationHash() { return _authenticationHash; } + public Signature getAuthenticationSig() { return _authenticationSig; } + + public String toString() { + StringBuffer rv = new StringBuffer(); + rv.append("Enclosure ").append(_enclosureType).append(" with headers {"); + for (Iterator iter = _publicHeaders.keySet().iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = _publicHeaders.getProperty(key); + rv.append('\'').append(key).append("' => '").append(val).append("\'"); + if (iter.hasNext()) + rv.append(", "); + } + rv.append("}"); + return rv.toString(); + } + + private void load(InputStream raw) throws IOException { + Sha256Standalone hash = new Sha256Standalone(); + hash.reset(); + _enclosureType = DataHelper.readLine(raw, hash); + + // read the headers + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + StringBuffer buf = new StringBuffer(512); + while (DataHelper.readLine(raw, buf, hash)) { + int len = buf.length(); + if (len <= 0) break; + baos.write(DataHelper.getUTF8(buf.toString()+"\n")); + int split = buf.indexOf("="); + if (split <= 0) throw new IOException("Invalid header: " + buf.toString()); + String key = buf.substring(0, split).trim(); + String val = null; + if (split+1 < len) + val = buf.substring(split+1).trim(); + else + val = ""; + + _publicHeaders.setProperty(key, val); + buf.setLength(0); + } + _publicHeaderData = baos.toByteArray(); + + // now comes the size header + String sz = DataHelper.readLine(raw, hash); + if (sz == null) throw new IOException("Missing size header"); + int split = sz.indexOf('='); + if ( (split <= 0) || (split + 1 >= sz.length()) ) throw new IOException("Invalid size header: " + sz); + String key = sz.substring(0, split); + String val = sz.substring(split+1); + if (!Constants.MSG_HEADER_SIZE.equals(key.trim())) throw new IOException("Size header expected instead of " + sz); + int bytes = -1; + try { + bytes = Integer.parseInt(val.trim()); + } catch (NumberFormatException nfe) { + throw new IOException("Invalid size header: " + bytes); + } + if (bytes < 0) throw new IOException("Invalid size header: " + bytes); + + // load the data into _data + loadData(raw, bytes, hash); + + _authorizationHash = new Hash(((Sha256Standalone)hash.clone()).digest()); + _authorizationSig = readSig(raw, hash); + + _authenticationHash = new Hash(hash.digest()); + _authenticationSig = readSig(raw, hash); + _authenticationSigOrig = _authenticationSig.getData(); + } + + public void store(String filename) throws IOException { + File out = new File(filename); + //if (out.exists()) throw new IOException("File already exists"); + OutputStream raw = new FileOutputStream(out); + try { + raw.write(DataHelper.getUTF8(_enclosureType+"\n")); + raw.write(_publicHeaderData); + raw.write(DataHelper.getUTF8("\n")); + raw.write(DataHelper.getUTF8(Constants.MSG_HEADER_SIZE + "=" + _data.length + "\n")); + raw.write(_data); + raw.write(DataHelper.getUTF8("AuthorizationSig=" + Base64.encode(_authorizationSig.getData())+"\n")); + raw.write(DataHelper.getUTF8("AuthenticationSig=" + Base64.encode(_authenticationSigOrig)+"\n")); + } catch (IOException ioe) { + try { raw.close(); } catch (IOException ioe2) {} + raw = null; + out.delete(); + throw ioe; + } 
finally { + if (raw != null) raw.close(); + } + } + + private void loadData(InputStream raw, int numBytes, Sha256Standalone hash) throws IOException { + /* + File bufDir = new File("./syndb_temp"); + bufDir.mkdir(); + File tmp = File.createTempFile("enclosure", "dat", bufDir); + FileOutputStream fos = new FileOutputStream(tmp); + byte buf[] = new byte[4096]; + int remaining = numBytes; + while (remaining > 0) { + int toRead = Math.min(remaining, buf.length); + int read = raw.read(buf, 0, toRead); + if (read == -1) + throw new IOException("End of the data reached with " + remaining + " bytes remaining"); + fos.write(buf, 0, read); + hash.update(buf, 0, read); + remaining -= read; + } + fos.close(); + _dataFile = tmp; + _data = new FileInputStream(tmp); + _dataSize = numBytes; + tmp.deleteOnExit(); + */ + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + byte buf[] = new byte[4096]; + int remaining = numBytes; + while (remaining > 0) { + int toRead = Math.min(remaining, buf.length); + int read = raw.read(buf, 0, toRead); + if (read == -1) + throw new IOException("End of the data reached with " + remaining + " bytes remaining"); + baos.write(buf, 0, read); + hash.update(buf, 0, read); + remaining -= read; + } + _data = baos.toByteArray(); + } + + private Signature readSig(InputStream raw, Sha256Standalone hash) throws IOException { + String rem = DataHelper.readLine(raw, hash); + if (rem != null) { + int start = rem.indexOf('='); + if ( (start < 0) || (start+1 >= rem.length()) ) + throw new IOException("No signature"); + rem = rem.substring(start+1); + } + byte val[] = Base64.decode(rem); + if ( (val == null) || (val.length != Signature.SIGNATURE_BYTES) ) + throw new IOException("Not enough data for the signature (" + rem + "/" + (val != null ? 
val.length : 0) + ")"); + return new Signature(val); + } +} diff --git a/src/syndie/data/EnclosureBody.java b/src/syndie/data/EnclosureBody.java new file mode 100644 index 0000000..a4d1d80 --- /dev/null +++ b/src/syndie/data/EnclosureBody.java @@ -0,0 +1,342 @@ +package syndie.data; + +import gnu.crypto.hash.Sha256Standalone; +import java.io.*; +import java.util.*; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import net.i2p.I2PAppContext; +import net.i2p.crypto.AESInputStream; +import net.i2p.data.*; +import net.i2p.util.Log; + +/** + * + */ +public class EnclosureBody { + private I2PAppContext _context; + private Log _log; + /** filename to byte[] */ + private Map _entries; + /** key to value */ + private Properties _headers; + /** list of config settings (Properties) for each page */ + private List _pageConfig; + /** list of config settings (Properties) for each attachment */ + private List _attachConfig; + private int _pages; + private int _attachments; + private List _references; + + public static final String ENTRY_AVATAR = "avatar32.png"; + public static final String ENTRY_HEADERS = "headers.dat"; + public static final String ENTRY_PAGE_PREFIX = "page"; + public static final String ENTRY_PAGE_DATA_SUFFIX = ".dat"; + public static final String ENTRY_PAGE_CONFIG_SUFFIX = ".cfg"; + public static final String ENTRY_ATTACHMENT_PREFIX = "attachment"; + public static final String ENTRY_ATTACHMENT_DATA_SUFFIX = ".dat"; + public static final String ENTRY_ATTACHMENT_CONFIG_SUFFIX = ".cfg"; + public static final String ENTRY_REFERENCES = "references.cfg"; + + protected EnclosureBody(I2PAppContext ctx) { + _context = ctx; + _log = ctx.logManager().getLog(getClass()); + _entries = new HashMap(); + _pageConfig = new ArrayList(); + _attachConfig = new ArrayList(); + _references = new ArrayList(); + _headers = new Properties(); + _pages = 0; + _attachments = 0; + } + + /** + * Decrypt and parse up the enclosure body with the given read key, throwing a DFE if + * the decryption or parsing fails. 
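 * (Editorial sketch, not part of the original commit: assuming an already-parsed Enclosure
 * named "enc" and a known channel read key "readKey" (a SessionKey), the encrypted body
 * would be handed to this constructor roughly as
 *     EnclosureBody body = new EnclosureBody(I2PAppContext.getGlobalContext(), enc.getData(), enc.getDataSize(), readKey);
 * the variable names here are hypothetical.)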
+ * format: IV + E(rand(nonzero) padding + 0 + internalSize + totalSize + data + rand, IV, key)+HMAC(bodySection, H(bodyKey+IV)) + */ + public EnclosureBody(I2PAppContext ctx, InputStream data, int size, SessionKey key) throws IOException, DataFormatException { + this(ctx); + byte iv[] = new byte[16]; + if (DataHelper.read(data, iv) != 16) throw new IOException("Not enough data for the IV"); + byte enc[] = new byte[size-16]; + int read = DataHelper.read(data, enc); + if (read != size-16) throw new IOException("Not enough data for the payload (size=" + (size-16) + ", read=" + read); + byte dec[] = new byte[size-16]; + ctx.aes().decrypt(enc, 0, dec, 0, key, iv, enc.length-32); + + int start = 0; + int pad = 0; + while (start < size && dec[start] != 0x0) { + start++; + pad++; + } + start++; + int off = start; + int internalSize = (int)DataHelper.fromLong(dec, off, 4); + off += 4; + int totalSize = (int)DataHelper.fromLong(dec, off, 4); + off += 4; + if (totalSize != (size-16)) { + if (_log.shouldLog(Log.DEBUG)) { + Sha256Standalone dbg = new Sha256Standalone(); + dbg.update(enc); + byte h[] = dbg.digest(); + _log.debug("borked: off=" + off); + _log.debug("borked: Encrypted body hashes to " + Base64.encode(h)); + _log.debug("borked: key used: " + Base64.encode(key.getData())); + _log.debug("borked: IV used: " + Base64.encode(iv)); + _log.debug("borked: pad: " + pad); + _log.debug("borked: totalSize: " + totalSize); + _log.debug("borked: size: " + size); + _log.debug("borked: internalSize: " + internalSize); + } + throw new DataFormatException("Invalid total size (" + totalSize + "/" + size + ")"); + } + if (internalSize + start + 8 > totalSize) throw new DataFormatException("Invalid internal size (" + internalSize + "), start (" + start + " iv=" + Base64.encode(iv) + " / pad=" + pad + ")"); + + byte hmacPreKey[] = new byte[SessionKey.KEYSIZE_BYTES+iv.length]; + System.arraycopy(key.getData(), 0, hmacPreKey, 0, SessionKey.KEYSIZE_BYTES); + System.arraycopy(iv, 0, hmacPreKey, SessionKey.KEYSIZE_BYTES, iv.length); + byte hmacKey[] = ctx.sha().calculateHash(hmacPreKey).getData(); + boolean hmacOK = ctx.hmac256().verify(new SessionKey(hmacKey), enc, 0, enc.length-32, enc, enc.length-32, 32); + if (!hmacOK) { + if (_log.shouldLog(Log.DEBUG)) { + _log.debug("borked hmac: hmacKey: " + Base64.encode(hmacKey)); + _log.debug("borked hmac: readMAC: " + Base64.encode(enc, enc.length-32, 32)); + } + throw new DataFormatException("Invalid HMAC, but valid sizes"); + } + + parse(new ByteArrayInputStream(dec, off, internalSize)); + } + + /** + * Decrypt and parse up the enclosure body with the given reply key, throwing a DFE if + * the decryption or parsing fails + */ + public EnclosureBody(I2PAppContext ctx, InputStream data, int size, PrivateKey key) throws IOException, DataFormatException { + this(ctx); + //if (true) throw new RuntimeException("Not yet implemented"); + byte asym[] = new byte[514]; + int read = DataHelper.read(data, asym); + if (read != asym.length) throw new IOException("Not enough data for the asym block (" + read + ")"); + //System.out.println("Asym block[" + asym.length + "]:\n" + Base64.encode(asym) + "\npubKey:\n" + Base64.encode(ctx.keyGenerator().getPublicKey(key).getData())); + byte decrypted[] = ctx.elGamalEngine().decrypt(asym, key); + if (decrypted == null) throw new DataFormatException("Decrypt failed"); + + Hash ivCalc = ctx.sha().calculateHash(decrypted, 0, 16); + byte bodyKeyData[] = new byte[SessionKey.KEYSIZE_BYTES]; + System.arraycopy(decrypted, 16, bodyKeyData, 0, 
bodyKeyData.length); + SessionKey bodyKey = new SessionKey(bodyKeyData); + + byte enc[] = new byte[size-asym.length-32]; + read = DataHelper.read(data, enc); + if (read != size-asym.length-32) throw new IOException("Not enough data for the payload (size=" + (size-asym.length) + ", read=" + read); + byte macRead[] = new byte[32]; + read = DataHelper.read(data, macRead); + if (read != macRead.length) throw new IOException("Not enough data for the mac"); + byte dec[] = new byte[enc.length]; + ctx.aes().decrypt(enc, 0, dec, 0, bodyKey, ivCalc.getData(), 0, enc.length); + + int start = 0; + while (start < size && dec[start] != 0x0) + start++; + start++; + int off = start; + int internalSize = (int)DataHelper.fromLong(dec, off, 4); + off += 4; + int totalSize = (int)DataHelper.fromLong(dec, off, 4); + off += 4; + if (totalSize != (size-asym.length)) throw new DataFormatException("Invalid total size (" + totalSize + "/" + size + ")"); + if (internalSize + start + 8 > totalSize) throw new DataFormatException("Invalid internal size (" + internalSize + "), start (" + start + ")"); + + // check the hmac + byte hmacPreKey[] = new byte[SessionKey.KEYSIZE_BYTES+16]; + System.arraycopy(bodyKeyData, 0, hmacPreKey, 0, SessionKey.KEYSIZE_BYTES); + System.arraycopy(ivCalc.getData(), 0, hmacPreKey, SessionKey.KEYSIZE_BYTES, 16); + byte hmacKey[] = ctx.sha().calculateHash(hmacPreKey).getData(); + boolean hmacOK = ctx.hmac256().verify(new SessionKey(hmacKey), enc, 0, enc.length, macRead, 0, macRead.length); + if (!hmacOK) { + if (_log.shouldLog(Log.DEBUG)) { + _log.debug("borked hmac: hmacKey: " + Base64.encode(hmacKey)); + _log.debug("borked hmac: readMAC: " + Base64.encode(macRead)); + } + throw new DataFormatException("Invalid HMAC, but valid sizes"); + } + + parse(new ByteArrayInputStream(dec, off, internalSize)); + } + + public int getPages() { return _pages; } + public int getAttachments() { return _attachments; } + public InputStream getAvatar() { + if (_entries.containsKey(ENTRY_AVATAR)) + return new ByteArrayInputStream((byte[])_entries.get(ENTRY_AVATAR)); + else + return null; + } + public Set getPageConfigKeys(int pageNum) { return ((Properties)_pageConfig.get(pageNum)).keySet(); } + public Set getAttachmentConfigKeys(int attachNum) { return ((Properties)_attachConfig.get(attachNum)).keySet(); } + public Set getHeaderKeys() { return _headers.keySet(); } + public int getReferenceRootCount() { return _references.size(); } + public ReferenceNode getReferenceRoot(int index) { return (ReferenceNode)_references.get(index); } + public Properties getHeaders() { return _headers; } + + public String getHeaderString(String key) { return _headers.getProperty(key); } + public byte[] getHeaderBytes(String key) { return Enclosure.toBytes(_headers.getProperty(key)); } + public SyndieURI getHeaderURI(String key) { return Enclosure.toURI(_headers.getProperty(key)); } + public SyndieURI[] getHeaderURIs(String key) { return Enclosure.toURIs(_headers.getProperty(key)); } + public String[] getHeaderStrings(String key) { return Enclosure.toStrings(_headers.getProperty(key)); } + public Boolean getHeaderBoolean(String key) { return Enclosure.toBoolean(_headers.getProperty(key)); } + public Long getHeaderLong(String key) { return Enclosure.toLong(_headers.getProperty(key)); } + public SessionKey getHeaderSessionKey(String key) { return Enclosure.toSessionKey(_headers.getProperty(key)); } + public SessionKey[] getHeaderSessionKeys(String key) { return Enclosure.toSessionKeys(_headers.getProperty(key)); } + public 
SigningPublicKey getHeaderSigningKey(String key) { return Enclosure.toSigningKey(_headers.getProperty(key)); } + public SigningPublicKey[] getHeaderSigningKeys(String key) { return Enclosure.toSigningKeys(Enclosure.toStrings(_headers.getProperty(key))); } + public PublicKey getHeaderEncryptKey(String key) { return Enclosure.toEncryptKey(_headers.getProperty(key)); } + public Date getHeaderDate(String key) { return Enclosure.toDate(_headers.getProperty(key)); } + + public String getPageConfigString(int page, String key) { return getPageConfig(page).getProperty(key); } + public byte[] getPageConfigBytes(int page, String key) { return Enclosure.toBytes(getPageConfig(page).getProperty(key)); } + public SyndieURI getPageConfigURI(int page, String key) { return Enclosure.toURI(getPageConfig(page).getProperty(key)); } + public String[] getPageConfigStrings(int page, String key) { return Enclosure.toStrings(getPageConfig(page).getProperty(key)); } + public Boolean getPageConfigBoolean(int page, String key) { return Enclosure.toBoolean(getPageConfig(page).getProperty(key)); } + public Long getPageConfigLong(int page, String key) { return Enclosure.toLong(getPageConfig(page).getProperty(key)); } + public SessionKey getPageConfigSessionKey(int page, String key) { return Enclosure.toSessionKey(getPageConfig(page).getProperty(key)); } + public SigningPublicKey getPageConfigSigningKey(int page, String key) { return Enclosure.toSigningKey(getPageConfig(page).getProperty(key)); } + public SigningPublicKey[] getPageConfigSigningKeys(int page, String key) { return Enclosure.toSigningKeys(Enclosure.toStrings(getPageConfig(page).getProperty(key))); } + public PublicKey getPageConfigEncryptKey(int page, String key) { return Enclosure.toEncryptKey(getPageConfig(page).getProperty(key)); } + public Date getPageConfigDate(int page, String key) { return Enclosure.toDate(getPageConfig(page).getProperty(key)); } + + public String getAttachmentConfigString(int attach, String key) { return getAttachmentConfig(attach).getProperty(key); } + public byte[] getAttachmentConfigBytes(int attach, String key) { return Enclosure.toBytes(getAttachmentConfig(attach).getProperty(key)); } + public SyndieURI getAttachmentConfigURI(int attach, String key) { return Enclosure.toURI(getAttachmentConfig(attach).getProperty(key)); } + public String[] getAttachmentConfigStrings(int attach, String key) { return Enclosure.toStrings(getAttachmentConfig(attach).getProperty(key)); } + public Boolean getAttachmentConfigBoolean(int attach, String key) { return Enclosure.toBoolean(getAttachmentConfig(attach).getProperty(key)); } + public Long getAttachmentConfigLong(int attach, String key) { return Enclosure.toLong(getAttachmentConfig(attach).getProperty(key)); } + public SessionKey getAttachmentConfigSessionKey(int attach, String key) { return Enclosure.toSessionKey(getAttachmentConfig(attach).getProperty(key)); } + public SigningPublicKey getAttachmentConfigSigningKey(int attach, String key) { return Enclosure.toSigningKey(getAttachmentConfig(attach).getProperty(key)); } + public SigningPublicKey[] getAttachmentConfigSigningKeys(int attach, String key) { return Enclosure.toSigningKeys(Enclosure.toStrings(getAttachmentConfig(attach).getProperty(key))); } + public PublicKey getAttachmentConfigEncryptKey(int attach, String key) { return Enclosure.toEncryptKey(getAttachmentConfig(attach).getProperty(key)); } + public Date getAttachmentConfigDate(int attach, String key) { return Enclosure.toDate(getAttachmentConfig(attach).getProperty(key)); } + + 
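    // (Editorial sketch, not part of the original commit.)
    // The typed accessors above all delegate to the static Enclosure.to*() helpers, which decode
    // Base64 values and split tab-delimited lists via Constants.split().  For an already-parsed
    // EnclosureBody named "body", reading typed header values would look roughly like:
    //     String[] tags  = body.getHeaderStrings("Tags");       // split on '\t'
    //     Long edition   = body.getHeaderLong("Edition");       // null if missing or non-numeric
    //     SessionKey key = body.getHeaderSessionKey("BodyKey"); // Base64-decoded 32-byte AES-256 key
    //     Date when      = body.getHeaderDate("PostDate");      // parsed with the "yyyyMMdd" format
    // "Tags", "Edition", and "BodyKey" match constants defined in Constants.java earlier in this
    // diff; "PostDate" is a made-up header name used only for illustration.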
public byte[] getPage(int page) { return (byte[])_entries.get(ENTRY_PAGE_PREFIX + page + ENTRY_PAGE_DATA_SUFFIX); } + public byte[] getAttachment(int attachment) { return (byte[])_entries.get(ENTRY_ATTACHMENT_PREFIX + attachment + ENTRY_ATTACHMENT_DATA_SUFFIX); } + + public String toString() { + StringBuffer rv = new StringBuffer(); + rv.append("EnclosureBody with ").append(_pages).append(" pages, ").append(_attachments).append(" attachments, and private headers of {"); + for (Iterator iter = _headers.keySet().iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = _headers.getProperty(key); + rv.append('\'').append(key).append("' => '").append(val).append("\'"); + if (iter.hasNext()) + rv.append(", "); + } + rv.append("}"); + return rv.toString(); + } + + + public Properties getPageConfig(int pageNum) { return (Properties)_pageConfig.get(pageNum); } + public Properties getAttachmentConfig(int attachNum) { return (Properties)_attachConfig.get(attachNum); } + + private void parse(InputStream zipData) throws IOException { + unzip(zipData); + _headers = parseProps(ENTRY_HEADERS); + for (int i = 0; i < _pages; i++) + _pageConfig.add(parseProps(ENTRY_PAGE_PREFIX + i + ENTRY_PAGE_CONFIG_SUFFIX)); + for (int i = 0; i < _attachments; i++) + _attachConfig.add(parseProps(ENTRY_ATTACHMENT_PREFIX + i + ENTRY_ATTACHMENT_CONFIG_SUFFIX)); + // parse the references + byte refs[] = (byte[])_entries.get(ENTRY_REFERENCES); + if (refs != null) { + //System.out.println("References entry found, size: " + refs.length); + _references.addAll(ReferenceNode.buildTree(new ByteArrayInputStream(refs))); + } else { + //System.out.println("No references entry found"); + } + } + private void unzip(InputStream zipData) throws IOException { + ZipInputStream in = new ZipInputStream(zipData); + ZipEntry entry = null; + while ( (entry = in.getNextEntry()) != null) { + String name = entry.getName(); + byte data[] = null; + long sz = entry.getSize(); + // spec & sun sayeth --1 implies unknown size, but kaffe [1.1.7] uses 0 too + if ( (sz == -1) || (sz == 0) ) { + ByteArrayOutputStream baos = new ByteArrayOutputStream(); + byte buf[] = new byte[4096]; + int read = -1; + while ( (read = in.read(buf)) != -1) + baos.write(buf, 0, read); + data = baos.toByteArray(); + } else { + data = new byte[(int)sz]; + if (DataHelper.read(in, data) != sz) + throw new IOException("Not enough data for " + name); + } + if (name.startsWith(ENTRY_ATTACHMENT_PREFIX) && name.endsWith(ENTRY_ATTACHMENT_DATA_SUFFIX)) + _attachments++; + else if (name.startsWith(ENTRY_PAGE_PREFIX) && name.endsWith(ENTRY_PAGE_DATA_SUFFIX)) + _pages++; + _entries.put(name, data); + } + } + private Properties parseProps(String entry) { + Properties rv = new Properties(); + byte data[] = (byte[])_entries.get(entry); + if (data == null) { + //System.out.println("Entry " + entry + " does not exist"); + return new Properties(); + } + parseProps(data, rv); + return rv; + } + private static void parseProps(byte data[], Properties rv) { + //System.out.println("parsing props: " + new String(data)); + int off = 0; + int dataStart = off; + int valStart = -1; + while (off < data.length) { + if (data[off] == '\n') { + try { + String key = new String(data, dataStart, valStart-1-dataStart, "UTF-8"); + String val = new String(data, valStart, off-valStart, "UTF-8"); + //System.out.println("Prop parsed: [" + key + "] = [" + val + "] (dataStart=" + dataStart + " valStart " + valStart + " off " + off + ")"); + rv.setProperty(key, val); + } catch 
(UnsupportedEncodingException uee) { + // + } catch (RuntimeException re) { + //re.printStackTrace(); + } + dataStart = off+1; + valStart = -1; + } else if ( (data[off] == '=') && (valStart == -1) ) { + valStart = off+1; + } else if (off + 1 >= data.length) { + if ( ( (valStart-1-dataStart) > 0) && ( (off+1-valStart) > 0) ) { + try { + String key = new String(data, dataStart, valStart-1-dataStart, "UTF-8"); + String val = new String(data, valStart, off+1-valStart, "UTF-8"); + //System.out.println("End prop parsed: [" + key + "] = [" + val + "] (dataStart=" + dataStart + " valStart " + valStart + " off " + off + ")"); + rv.setProperty(key, val); + } catch (UnsupportedEncodingException uee) { + // + } catch (RuntimeException re) { + //re.printStackTrace(); + } + } + } + off++; + } + } + + public static void main(String args[]) { + Properties props = new Properties(); + parseProps("a=b\nc=d".getBytes(), props); + System.out.println("props: " + props); + } +} diff --git a/src/syndie/data/MessageInfo.java b/src/syndie/data/MessageInfo.java new file mode 100644 index 0000000..69c4940 --- /dev/null +++ b/src/syndie/data/MessageInfo.java @@ -0,0 +1,204 @@ +package syndie.data; + +import java.util.Collections; +import java.util.Date; +import java.util.List; +import java.util.Set; +import net.i2p.data.Hash; + +/** + * + */ +public class MessageInfo { + private long _internalId; + private SyndieURI _uri; + private long _authorChannelId; + private long _messageId; + private long _scopeChannelId; + private long _targetChannelId; + private Hash _targetChannel; + private String _subject; + private Hash _overwriteChannel; + private long _overwriteMessage; + private boolean _forceNewThread; + private boolean _refuseReplies; + private boolean _wasEncrypted; + private boolean _wasPBEncrypted; + private boolean _wasPrivate; + private boolean _wasAuthorized; + private boolean _wasAuthenticated; + /** prompt is only listed if the message could not be decrypted */ + private String _passphrasePrompt; + /** readKeyUnknown is only set if the message could not be decrypted and no prompt was specified */ + private boolean _readKeyUnknown; + private boolean _replyKeyUnknown; + private boolean _isCancelled; + private long _expiration; + /** list of SyndieURI instances this message replies to, most recent first */ + private List _hierarchy; + /** set of tags (String) that are hidden in the message */ + private Set _privateTags; + /** set of tags (String) that are publicly visible */ + private Set _publicTags; + private int _attachmentCount; + private int _pageCount; + /** list of ReferenceNode roots attached to the message (does not include parsed data from pages or attachments) */ + private List _references; + + /** Creates a new instance of MessageInfo */ + public MessageInfo() { + _internalId = -1; + _uri = null; + _authorChannelId = -1; + _messageId = -1; + _scopeChannelId = -1; + _targetChannelId = -1; + _targetChannel = null; + _subject = null; + _overwriteChannel = null; + _overwriteMessage = -1; + _forceNewThread = false; + _refuseReplies = false; + _wasEncrypted = false; + _wasPBEncrypted = false; + _wasPrivate = false; + _wasAuthorized = false; + _wasAuthenticated = false; + _passphrasePrompt = null; + _readKeyUnknown = false; + _replyKeyUnknown = false; + _isCancelled = false; + _expiration = -1; + _hierarchy = Collections.EMPTY_LIST; + _privateTags = Collections.EMPTY_SET; + _publicTags = Collections.EMPTY_SET; + _references = Collections.EMPTY_LIST; + _attachmentCount = 0; + _pageCount = 0; + } + + public 
long getInternalId() { return _internalId; } + public void setInternalId(long internalId) { _internalId = internalId; } + public SyndieURI getURI() { return _uri; } + public void setURI(SyndieURI uri) { _uri = uri; } + public long getAuthorChannelId() { return _authorChannelId; } + public void setAuthorChannelId(long id) { _authorChannelId = id; } + public long getMessageId() { return _messageId; } + public void setMessageId(long messageId) { _messageId = messageId; } + /** channel that the messageId is unique within */ + public long getScopeChannelId() { return _scopeChannelId; } + public void setScopeChannelId(long scopeChannelId) { _scopeChannelId = scopeChannelId; } + public Hash getScopeChannel() { return _uri.getScope(); } + public long getTargetChannelId() { return _targetChannelId; } + public void setTargetChannelId(long targetChannelId) { _targetChannelId = targetChannelId; } + public Hash getTargetChannel() { return _targetChannel; } + public void setTargetChannel(Hash targetChannel) { _targetChannel = targetChannel; } + public String getSubject() { return _subject; } + public void setSubject(String subject) { _subject = subject; } + public Hash getOverwriteChannel() { return _overwriteChannel; } + public void setOverwriteChannel(Hash overwriteChannel) { _overwriteChannel = overwriteChannel; } + public long getOverwriteMessage() { return _overwriteMessage; } + public void setOverwriteMessage(long overwriteMessage) { _overwriteMessage = overwriteMessage; } + public boolean getForceNewThread() { return _forceNewThread; } + public void setForceNewThread(boolean forceNewThread) { _forceNewThread = forceNewThread; } + public boolean getRefuseReplies() { return _refuseReplies; } + public void setRefuseReplies(boolean refuseReplies) { _refuseReplies = refuseReplies; } + /** + * was this post normally encrypted (true) or was the body encryption key + * publicized (false) - effectively making it unencrypted + */ + public boolean getWasEncrypted() { return _wasEncrypted; } + public void setWasEncrypted(boolean wasEncrypted) { _wasEncrypted = wasEncrypted; } + public boolean getWasPassphraseProtected() { return _wasPBEncrypted; } + public void setWasPassphraseProtected(boolean pbe) { _wasPBEncrypted = pbe; } + /** + * was this post encrypted to the channel's reply encryption key (true), as opposed to + * a normal post on the channel encrypted with the channel read key (false) + */ + public boolean getWasPrivate() { return _wasPrivate; } + public void setWasPrivate(boolean wasPrivate) { _wasPrivate = wasPrivate; } + /** was the post signed by an authorized key */ + public boolean getWasAuthorized() { return _wasAuthorized; } + public void setWasAuthorized(boolean wasAuthorized) { _wasAuthorized = wasAuthorized; } + /** was the post's author specified (or implied) and did they authenticate that identity */ + public boolean getWasAuthenticated() { return _wasAuthenticated;} + public void setWasAuthenticated(boolean wasAuthenticated) { _wasAuthenticated = wasAuthenticated; } + /** has the post been cancelled by an authorized person (the original author or managers on the channel it was posted to) */ + public boolean getIsCancelled() { return _isCancelled; } + public void setIsCancelled(boolean isCancelled) { _isCancelled = isCancelled; } + /** when the post should be discarded (or -1 if never) */ + public long getExpiration() { return _expiration; } + public void setExpiration(long expiration) { _expiration = expiration; } + /** list of SyndieURI instances this message replies to, most recent 
first */ + public List getHierarchy() { return _hierarchy; } + public void setHierarchy(List hierarchy) { _hierarchy = hierarchy; } + /** set of tags (String) */ + public Set getPrivateTags() { return _privateTags; } + public void setPrivateTags(Set privateTags) { _privateTags = privateTags; } + /** set of tags (String) */ + public Set getPublicTags() { return _publicTags; } + public void setPublicTags(Set publicTags) { _publicTags = publicTags; } + public int getAttachmentCount() { return _attachmentCount; } + public void setAttachmentCount(int attachmentCount) { _attachmentCount = attachmentCount; } + public int getPageCount() { return _pageCount; } + public void setPageCount(int pageCount) { _pageCount = pageCount; } + /** list of ReferenceNode roots attached to the message (does not include parsed data from pages or attachments) */ + public List getReferences() { return _references; } + public void setReferences(List refs) { _references = refs; } + /** if specified, the post was imported, but we didn't have the passphrase */ + public String getPassphrasePrompt() { return _passphrasePrompt; } + public void setPassphrasePrompt(String prompt) { _passphrasePrompt = prompt; } + public boolean getReadKeyUnknown() { return _readKeyUnknown; } + public void setReadKeyUnknown(boolean isUnknown) { _readKeyUnknown = isUnknown; } + public boolean getReplyKeyUnknown() { return _replyKeyUnknown; } + public void setReplyKeyUnknown(boolean isUnknown) { _replyKeyUnknown = isUnknown; } + + public boolean equals(Object o) { return ((MessageInfo)o)._internalId == _internalId; } + public int hashCode() { return (int)_internalId; } + public String toString() { + StringBuffer buf = new StringBuffer(); + buf.append("Message ").append(_internalId).append(":\n"); + buf.append("Posted on "); + if (_targetChannel != null) + buf.append(_targetChannel.toBase64()).append(" "); + buf.append("(internal channel id: ").append(_targetChannelId).append(")\n"); + buf.append("Channel messageId: ").append(_messageId).append("\n"); + if ( (_overwriteChannel != null) && (_overwriteMessage >= 0) ) + buf.append("Overwriting ").append(_overwriteChannel.toBase64()).append(":").append(_overwriteMessage).append("\n"); + if (_authorChannelId >= 0) + buf.append("Author: ").append(_authorChannelId).append("\n"); + if (_subject != null) + buf.append("Subject: ").append(_subject).append("\n"); + buf.append("Force this message onto a new thread? ").append(_forceNewThread).append("\n"); + buf.append("Force replies to use their own thread? ").append(_refuseReplies).append("\n"); + buf.append("Was the post readable to anyone? ").append(!_wasEncrypted && !_wasPBEncrypted).append("\n"); + buf.append("Was the post passphrase protected? ").append(_wasPBEncrypted).append("\n"); + buf.append("Was the message encrypted to the channel's reply key? ").append(_wasPrivate).append("\n"); + buf.append("Was the message signed by an authorized user? ").append(_wasAuthorized).append("\n"); + buf.append("Was the author specified and authenticated? ").append(_wasAuthenticated).append("\n"); + buf.append("Was the message (subsequently) cancelled by an authorized user? 
").append(_isCancelled).append("\n"); + if (_expiration <= 0) + buf.append("Message expiration: never\n"); + else + buf.append("Message expiration: ").append(new Date(_expiration)).append("\n"); + if ( (_hierarchy != null) && (_hierarchy.size() > 0) ) { + buf.append("This message replies to: "); + for (int i = 0; i < _hierarchy.size(); i++) { + SyndieURI uri = (SyndieURI)_hierarchy.get(i); + buf.append(uri.toString()); + if (i + 1 < _hierarchy.size()) + buf.append(", "); + else + buf.append("\n"); + } + } + if ( (_publicTags != null) && (_publicTags.size() > 0) ) + buf.append("Publicly visible tags on the message: ").append(_publicTags).append("\n"); + if ( (_privateTags != null) && (_privateTags.size() > 0) ) + buf.append("Hidden tags on the message: ").append(_privateTags).append("\n"); + buf.append("Pages in the message: ").append(_pageCount).append("\n"); + buf.append("Attachments in the message: ").append(_attachmentCount).append("\n"); + buf.append("References in the message: ").append(_references.size()).append("\n"); + return buf.toString(); + } +} diff --git a/src/syndie/data/NymKey.java b/src/syndie/data/NymKey.java new file mode 100644 index 0000000..cace871 --- /dev/null +++ b/src/syndie/data/NymKey.java @@ -0,0 +1,39 @@ +package syndie.data; + +import net.i2p.data.*; + +public class NymKey { + private Hash _channel; + private byte _data[]; + private String _dataHash; + private boolean _authenticated; + private String _function; + private String _type; + private long _nymId; + public NymKey(String type, byte data[], boolean authenticated, String function, long nymId, Hash channel) { + this(type, data, null, authenticated, function, nymId, channel); + } + public NymKey(String type, byte data[], String dataHash, boolean authenticated, String function, long nymId, Hash channel) { + _channel = channel; + _data = data; + _dataHash = dataHash; + _authenticated = authenticated; + _function = function; + _type = type; + _nymId = nymId; + } + public byte[] getData() { return _data; } + /** DSA/ElGamal2048/AES256, etc */ + public String getType() { return _type; } + /** do we know it is a valid key for the channel? */ + public boolean getAuthenticated() { return _authenticated; } + /** read/post/manage/reply, etc */ + public String getFunction() { return _function; } + /** nym that knows this key */ + public long getNymId() { return _nymId; } + public Hash getChannel() { return _channel; } + public String toString() { + return _function + " for " + _channel.toBase64() + " " + Base64.encode(_data) + + (_dataHash != null ? " / " + _dataHash : "") + " (" + _authenticated + ")"; + } +} diff --git a/src/syndie/data/ReferenceNode.java b/src/syndie/data/ReferenceNode.java new file mode 100644 index 0000000..5eaf336 --- /dev/null +++ b/src/syndie/data/ReferenceNode.java @@ -0,0 +1,280 @@ +package syndie.data; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.net.URISyntaxException; +import java.util.ArrayList; +import java.util.List; +import net.i2p.data.DataHelper; + +/** + * tree structure referencing resources + */ +public class ReferenceNode { + private String _name; + private SyndieURI _uri; + private String _description; + private String _refType; + protected List _children; + protected ReferenceNode _parent; + /** + * contains the node's index in a tree of nodes. 
For instance, "1.3.2.15" + * means this is the 15th child of the node "1.3.2", which is the 2nd child + * of the node "1.3", which is the 3rd child of the root node ("1") + */ + protected String _treeIndex; + /** sequential index in the walk (unique within the tree, but not a descriptive location) */ + private int _treeIndexNum; + + public ReferenceNode(String name, SyndieURI uri, String description, String type) { + _name = name; + _uri = uri; + _description = description; + _refType = type; + _children = new ArrayList(); + _parent = null; + _treeIndex = "1"; + _treeIndexNum = -1; + } + + public String getName() { return _name; } + public SyndieURI getURI() { return _uri; } + public String getDescription() { return _description; } + public String getReferenceType() { return _refType; } + public int getChildCount() { return _children.size(); } + public ReferenceNode getChild(int index) { return (ReferenceNode)_children.get(index); } + public ReferenceNode getParent() { return _parent; } + public String getTreeIndex() { return _treeIndex; } + public int getTreeIndexNum() { return _treeIndexNum; } + + public void setName(String name) { _name = name; } + public void setURI(SyndieURI uri) { _uri = uri; } + public void setDescription(String desc) { _description = desc; } + public void setReferenceType(String type) { _refType = type; } + public void setTreeIndexNum(int num) { _treeIndexNum = num; } + + public ReferenceNode addChild(String name, SyndieURI uri, String description, String type) { + ReferenceNode rv = new ReferenceNode(name, uri, description, type); + rv._parent = this; + rv._treeIndex = _treeIndex + "." + (_children.size()+1); + //System.out.println("Add new child [" + rv._treeIndex + "/" + name + "] to " + _treeIndex + "/" + name); + _children.add(rv); + return rv; + } + + public void addChild(ReferenceNode ref) { + ref._parent = this; + if (!_children.contains(ref)) { + ref._treeIndex = _treeIndex + "." + (_children.size()+1); + //System.out.println("Add child [" + ref._treeIndex + "/" + ref.getName() + "] to " + _treeIndex + "/" + _name + " (#kids: " + _children.size() + " instance: " + System.identityHashCode(this)); + _children.add(ref); + } else { + //System.out.println("child already added: " + ref._treeIndex + "/" + ref.getName() + " to " + _treeIndex + "/" + _name); + } + } + public void removeChild(ReferenceNode child) { + _children.remove(child); + // does not reindex! + child._parent = null; + } + + /** + * return the roots of the tree, as parsed from the given input stream. 
The format is + * simple: + * "[\t]*$name\t$uri\t$refType\t$description\n" + * the tab indentation at the beginning of the line determines the tree structure, such as + * + * rootName\t\t\tfirst grouping + * \tchildName\t\t\t + * \tsecondChild\t\t\t + * \t\tchildOfSecondChild\t\t\t + * secondRoot\t\t\t + * thirdRoot\t\t\t + * \tchildOfThirdRoot\t\t\t + * + * etc + */ + public static List buildTree(InputStream treeData) { + int index = 0; + List rv = new ArrayList(); + ReferenceNode prevNode = null; + try { + StringBuffer buf = new StringBuffer(256); + while (DataHelper.readLine(treeData, buf)) { + int indentation = 0; + int nameEnd = -1; + int uriEnd = -1; + int refTypeEnd = -1; + + for (int i = 0; i < buf.length(); i++) { + if (buf.charAt(i) == '\t') + indentation++; + else + break; + } + for (int i = indentation; i < buf.length(); i++) { + if (buf.charAt(i) == '\t') { + if (nameEnd == -1) + nameEnd = i; + else if (uriEnd == -1) + uriEnd = i; + else if (refTypeEnd == -1) + refTypeEnd = i; + } + } + String name = null; + if ((nameEnd)-(indentation) > 0) + name = buf.substring(indentation, nameEnd); + String uri = null; + if ((uriEnd)-(nameEnd+1) > 0) + uri = buf.substring(nameEnd+1, uriEnd); + SyndieURI suri = null; + if (uri != null) { + try { + suri = new SyndieURI(uri); + } catch (URISyntaxException use) { + suri = null; + } + } + String refType = null; + if ((refTypeEnd)-(uriEnd+1) > 0) + refType = buf.substring(uriEnd+1, refTypeEnd); + String desc = null; + if ((buf.length())-(refTypeEnd+1) > 0) + desc = buf.substring(refTypeEnd+1).trim(); + + // ok, now to interpret + if ( (indentation == 0) || (prevNode == null) ) { + ReferenceNode node = new ReferenceNode(name, suri, desc, refType); + prevNode = node; + node._treeIndex = (""+rv.size() + 1); + //System.out.println("Create new [" + node._treeIndex + "/" + name + "]"); + node._treeIndexNum = index++; + rv.add(node); + } else { + int height = -1; + ReferenceNode cur = prevNode; + while (cur != null) { + cur = cur.getParent(); + height++; + } + if (indentation > height) { // child of the prev node + prevNode = prevNode.addChild(name, suri, desc, refType); + prevNode._treeIndexNum = index++; + } else if (indentation == height) { // sibling of the prev node + prevNode = prevNode.getParent().addChild(name, suri, desc, refType); + prevNode._treeIndexNum = index++; + } else { // uncle/great-uncle/etc + int diff = height-indentation; + for (int i = 0; i < diff; i++) + prevNode = prevNode.getParent(); + prevNode = prevNode.addChild(name, suri, desc, refType); + prevNode._treeIndexNum = index++; + } + } + buf.setLength(0); + } + } catch (IOException ioe) { + // ignore + } + return rv; + } + + public String toString() { + StringBuffer buf = new StringBuffer(); + append(buf, this, 0); + return buf.toString(); + } + + /** stringify a forest of nodes into a format that can be parsed with buildTree() */ + public static String walk(List roots) { + StringBuffer walked = new StringBuffer(); + for (int i = 0; i < roots.size(); i++) { + ReferenceNode node = (ReferenceNode)roots.get(i); + append(walked, node, 0); + } + return walked.toString(); + } + + /** depth first traversal */ + public static void walk(List roots, Visitor visitor) { + for (int i = 0; i < roots.size(); i++) { + ReferenceNode node = (ReferenceNode)roots.get(i); + node.walk(visitor, 0, i); + } + } + private void walk(Visitor visitor, int depth, int siblingOrder) { + visitor.visit(this, depth, siblingOrder); + for (int i = 0; i < _children.size(); i++) { + ReferenceNode child = 
(ReferenceNode)_children.get(i); + child.walk(visitor, depth+1, i); + } + } + + public interface Visitor { + public void visit(ReferenceNode node, int depth, int siblingOrder); + } + + public static void main(String args[]) { + test(TEST_TREE1); + test(TEST_TREE2); + test(TEST_TREE3); + } + + private static void test(String treeContent) { + List tree = ReferenceNode.buildTree(new ByteArrayInputStream(DataHelper.getUTF8(treeContent))); + StringBuffer walked = new StringBuffer(treeContent.length()); + for (int i = 0; i < tree.size(); i++) { + ReferenceNode node = (ReferenceNode)tree.get(i); + append(walked, node, 0); + } + if (walked.toString().equals(treeContent)) + System.out.println("Trees match: \n" + treeContent); + else + System.out.println("Trees do not match: tree content = \n" + treeContent + "\n\nwalked = \n" + walked.toString()); + } + + private static void append(StringBuffer walked, ReferenceNode node, int indent) { + for (int i = 0; i < indent; i++) + walked.append('\t'); + if (node.getName() != null) + walked.append(node.getName()); + walked.append('\t'); + if (node.getURI() != null) + walked.append(node.getURI().toString()); + walked.append('\t'); + if (node.getReferenceType() != null) + walked.append(node.getReferenceType()); + walked.append('\t'); + if (node.getDescription() != null) + walked.append(node.getDescription()); + walked.append('\n'); + for (int i = 0; i < node.getChildCount(); i++) + append(walked, node.getChild(i), indent+1); + } + + private static final String TEST_TREE1 = "rootName\t\t\tfirst grouping\n" + + "\tchildName\t\t\t\n" + + "\tsecondChild\t\t\t\n" + + "\t\tchildOfSecondChild\t\t\t\n" + + "secondRoot\t\t\t\n" + + "thirdRoot\t\t\t\n" + + "\tchildOfThirdRoot\t\t\t\n"; + + private static final String TEST_TREE2 = "rootName\t\tfirstType\tfirst grouping\n" + + "\tchildName\t\tsecondType\t\n" + + "\tsecondChild\t\tthirdType\t\n" + + "\t\tchildOfSecondChild\t\tfourthType\t\n" + + "s\t\ta\td\n" + + "thirdRoot\t\t\t\n" + + "\tchildOfThirdRoot\t\t\t\n"; + + private static final String TEST_TREE3 = "rootName\t\tfirstType\tfirst grouping\n" + + "\tchildName\t\tsecondType\t\n" + + "\tsecondChild\t\tthirdType\t\n" + + "\t\tchildOfSecondChild\t\tfourthType\t\n" + + "s\turn:syndie:dummy:de\ta\td\n" + + "thirdRoot\t\t\t\n" + + "\tchildOfThirdRoot\t\t\t\n\t\t\t\t\t\n"; +} diff --git a/src/syndie/data/SyndieURI.java b/src/syndie/data/SyndieURI.java new file mode 100644 index 0000000..68c29eb --- /dev/null +++ b/src/syndie/data/SyndieURI.java @@ -0,0 +1,500 @@ +package syndie.data; + +import java.lang.reflect.Array; +import java.net.URISyntaxException; +import java.util.*; +import net.i2p.data.*; +import syndie.Constants; + +/** + * Maintain a reference within syndie per the syndie URN spec, including canonical + * encoding and decoding + * + */ +public class SyndieURI { + private TreeMap _attributes; + private String _type; + private transient String _stringified; + + public SyndieURI(String encoded) throws URISyntaxException { + fromString(encoded); + } + public SyndieURI(String type, TreeMap attributes) { + if ( (type == null) || (type.trim().length() <= 0) || (attributes == null) ) + throw new IllegalArgumentException("Invalid attributes or type"); + _type = type; + _attributes = attributes; + } + public SyndieURI(String type, Map attributes) { + this(type, new TreeMap(attributes)); + } + + public static SyndieURI createSearch(String searchString) { + String searchURI = "urn:syndie:search:d7:keyword" + searchString.length() + ":" + searchString + "e"; + try { + 
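+            // e.g. createSearch("foo") builds "urn:syndie:search:d7:keyword3:fooe" (illustrative value)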
return new SyndieURI(searchURI); + } catch (URISyntaxException use) { + throw new RuntimeException("Hmm, encoded search URI is not valid: " + use.getMessage() + " [" + searchURI + "]"); + } + } + + public static SyndieURI createURL(String url) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:url:d"); + if (url != null) + buf.append("3:url").append(url.length()).append(":").append(url); + buf.append("e"); + try { + return new SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + public static SyndieURI createArchive(String url, String pass) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:archive:d"); + if (url != null) + buf.append("3:url").append(url.length()).append(':').append(url); + if (pass != null) { + buf.append("11:postKeyType4:pass11:postKeyData"); + String base64Pass = Base64.encode(DataHelper.getUTF8(pass)); + buf.append(base64Pass.length()).append(':').append(base64Pass); + } + buf.append("e"); + try { + return new SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + public static SyndieURI createScope(Hash scope) { return createMessage(scope, -1, -1); } + public static SyndieURI createMessage(Hash scope, long msgId) { return createMessage(scope, msgId, -1); } + public static SyndieURI createMessage(Hash scope, long msgId, int pageNum) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:channel:d"); + if (scope != null) { + buf.append("7:channel"); + String ch = scope.toBase64(); + buf.append(ch.length()).append(':').append(ch); + if (msgId >= 0) { + buf.append("9:messageIdi").append(msgId).append("e"); + if (pageNum >= 0) + buf.append("4:pagei").append(pageNum).append("e"); + } + } + buf.append('e'); + try { + return new SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + + + /** + * Create a URI that includes the given read key for the specified channel + */ + public static SyndieURI createKey(Hash scope, SessionKey sessionKey) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:channel:d"); + if (scope != null) { + buf.append("7:channel"); + String ch = scope.toBase64(); + buf.append(ch.length()).append(':').append(ch); + buf.append("7:readKey"); + ch = Base64.encode(sessionKey.getData()); + buf.append(ch.length()).append(':').append(ch); + } + buf.append('e'); + try { + return new SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + + /** + * Create a URI that includes the given post or manage key for the specified channel + */ + public static SyndieURI createKey(Hash scope, String function, SigningPrivateKey priv) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:channel:d"); + if (scope != null) { + buf.append("7:channel"); + String ch = scope.toBase64(); + buf.append(ch.length()).append(':').append(ch); + if (function.equalsIgnoreCase(Constants.KEY_FUNCTION_POST)) + buf.append("7:postKey"); + else if (function.equalsIgnoreCase(Constants.KEY_FUNCTION_MANAGE)) + buf.append("9:manageKey"); + ch = Base64.encode(priv.getData()); + buf.append(ch.length()).append(':').append(ch); + } + buf.append('e'); + try { + return new 
SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + + /** + * Create a URI that includes the private key to decrypt replies for the channel + */ + public static SyndieURI createKey(Hash scope, PrivateKey priv) { + StringBuffer buf = new StringBuffer(); + buf.append("urn:syndie:channel:d"); + if (scope != null) { + buf.append("7:channel"); + String ch = scope.toBase64(); + buf.append(ch.length()).append(':').append(ch); + buf.append("8:replyKey"); + ch = Base64.encode(priv.getData()); + buf.append(ch.length()).append(':').append(ch); + } + buf.append('e'); + try { + return new SyndieURI(buf.toString()); + } catch (URISyntaxException use) { + System.err.println("attempted: " + buf.toString()); + use.printStackTrace(); + return null; + } + } + + private static final String TYPE_URL = "url"; + private static final String TYPE_CHANNEL = "channel"; + private static final String TYPE_ARCHIVE = "archive"; + private static final String TYPE_TEXT = "text"; + + /** does this this URI maintain a reference to a URL? */ + public boolean isURL() { return TYPE_URL.equals(_type); } + /** does this this URI maintain a reference to a syndie channel/message/page/attachment? */ + public boolean isChannel() { return TYPE_CHANNEL.equals(_type); } + /** does this this URI maintain a reference to a syndie archive? */ + public boolean isArchive() { return TYPE_ARCHIVE.equals(_type); } + /** does this this URI maintain a reference to a URL? */ + public boolean isText() { return TYPE_TEXT.equals(_type); } + + public String getType() { return _type; } + public Map getAttributes() { return _attributes; } + + public String getString(String key) { return (String)_attributes.get(key); } + public Long getLong(String key) { return (Long)_attributes.get(key); } + public String[] getStringArray(String key) { return (String[])_attributes.get(key); } + public boolean getBoolean(String key, boolean defaultVal) { + Object o = _attributes.get(key); + if (o == null) return defaultVal; + if (o instanceof Boolean) + return ((Boolean)o).booleanValue(); + String str = o.toString(); + if (str == null) + return defaultVal; + else + return Boolean.valueOf(str).booleanValue(); + } + public Hash getScope() { return getHash("channel"); } + private Hash getHash(String key) { + String val = (String)_attributes.get(key); + if (val != null) { + byte b[] = Base64.decode(val); + if ( (b != null) && (b.length == Hash.HASH_LENGTH) ) + return new Hash(b); + } + return null; + } + public SessionKey getReadKey() { + byte val[] = getBytes("readKey"); + if ( (val != null) && (val.length == SessionKey.KEYSIZE_BYTES) ) + return new SessionKey(val); + else + return null; + } + public SigningPrivateKey getPostKey() { + byte val[] = getBytes("postKey"); + if ( (val != null) && (val.length == SigningPrivateKey.KEYSIZE_BYTES) ) + return new SigningPrivateKey(val); + else + return null; + } + public SigningPrivateKey getManageKey() { + byte val[] = getBytes("manageKey"); + if ( (val != null) && (val.length == SigningPrivateKey.KEYSIZE_BYTES) ) + return new SigningPrivateKey(val); + else + return null; + } + public PrivateKey getReplyKey() { + byte val[] = getBytes("replyKey"); + if ( (val != null) && (val.length == PrivateKey.KEYSIZE_BYTES) ) + return new PrivateKey(val); + else + return null; + } + private byte[] getBytes(String key) { + String val = (String)_attributes.get(key); + if (val != null) + return Base64.decode(val); + else + return 
null; + } + public Long getMessageId() { return getLong("messageId"); } + + public void fromString(String bencodedURI) throws URISyntaxException { + if (bencodedURI == null) throw new URISyntaxException("null URI", "no uri"); + if (bencodedURI.startsWith("urn:syndie:")) + bencodedURI = bencodedURI.substring("urn:syndie:".length()); + int endType = bencodedURI.indexOf(':'); + if (endType <= 0) + throw new URISyntaxException(bencodedURI, "Missing type"); + if (endType >= bencodedURI.length()) + throw new URISyntaxException(bencodedURI, "No bencoded attributes"); + _type = bencodedURI.substring(0, endType); + bencodedURI = bencodedURI.substring(endType+1); + _attributes = bdecode(bencodedURI); + if (_attributes == null) { + throw new URISyntaxException(bencodedURI, "Invalid bencoded attributes"); + } + } + public String toString() { + if (_stringified == null) + _stringified = "urn:syndie:" + _type + ":" + bencode(_attributes); + return _stringified; + } + + public boolean equals(Object obj) { return toString().equals(obj.toString()); } + public int hashCode() { return toString().hashCode(); } + + public static void main(String args[]) { test(); } + private static void test() { + try { + new SyndieURI("urn:syndie:channel:d7:channel40:12345678901234567890123456789012345678908:showRefs4:truee"); + } catch (Exception e) { + e.printStackTrace(); + return; + } + if (!test(new TreeMap())) + throw new RuntimeException("failed on empty"); + if (!test(createStrings())) + throw new RuntimeException("failed on strings"); + if (!test(createList())) + throw new RuntimeException("failed on list"); + if (!test(createMixed())) + throw new RuntimeException("failed on mixed"); + if (!test(createMultiMixed())) + throw new RuntimeException("failed on multimixed"); + System.out.println("Passed all tests"); + } + private static TreeMap createStrings() { + TreeMap m = new TreeMap(); + for (int i = 0; i < 64; i++) + m.put("key" + i, "val" + i); + return m; + } + private static TreeMap createList() { + TreeMap m = new TreeMap(); + for (int i = 0; i < 8; i++) + m.put("key" + i, "val" + i); + String str[] = new String[] { "stringElement1", "stringElement2", "stringElement3" }; + m.put("stringList", str); + return m; + } + private static TreeMap createMixed() { + TreeMap m = new TreeMap(); + for (int i = 0; i < 8; i++) + m.put("key" + i, "val" + i); + String str[] = new String[] { "stringElement1", "stringElement2", "stringElement3" }; + m.put("stringList", str); + for (int i = 8; i < 16; i++) + m.put("intKey" + i, (i%2==0?(Number)(new Long(i)):(Number)(new Integer(i)))); + return m; + } + private static TreeMap createMultiMixed() { + TreeMap m = new TreeMap(); + for (int i = 0; i < 8; i++) + m.put("key" + i, "val" + i); + for (int i = 0; i < 10; i++) { + String str[] = new String[] { "stringElement1", "stringElement2", "stringElement3" }; + m.put("stringList" + i, str); + } + for (int i = 8; i < 16; i++) + m.put("intKey" + i, (i%2==0?(Number)(new Long(i)):(Number)(new Integer(i)))); + return m; + } + private static boolean test(TreeMap orig) { + String enc = bencode(orig); + System.out.println("bencoded: " + enc); + TreeMap decoded = null; + try { + decoded = bdecode(enc); + } catch (URISyntaxException use) { + use.printStackTrace(); + } + if (decoded == null) return false; + Set origKeys = new HashSet(orig.keySet()); + Set decKeys = new HashSet(decoded.keySet()); + if (origKeys.equals(decKeys)) { + for (Iterator iter = origKeys.iterator(); iter.hasNext(); ) { + String k = (String)iter.next(); + Object origVal = 
orig.get(k); + Object decVal = decoded.get(k); + if (origVal.getClass().isArray()) { + boolean ok = Arrays.equals((String[])origVal, (String[])decVal); + if (!ok) { + System.out.println("key " + k + " is an unequal array"); + return false; + } + } else if (origVal instanceof Number) { + long o = ((Number)origVal).longValue(); + long d = ((Number)decVal).longValue(); + if (d != o) { + System.out.println("key " + k + " is an unequal number: " + d + ", " + o); + } + } else if (!origVal.equals(decVal)) { + System.out.println("key " + k + " does not match (" + origVal + ", " + decVal + ")/(" + origVal.getClass().getName() + ", " + decVal.getClass().getName() + ")"); + return false; + } + } + return true; + } else { + return false; + } + } + + ///// + // remaining is a trivial bencode/bdecode impl, capable only of handling + // what the SyndieURI needs + ///// + + private static final String bencode(TreeMap attributes) { + StringBuffer buf = new StringBuffer(64); + buf.append('d'); + for (Iterator iter = attributes.keySet().iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + buf.append(key.length()).append(':').append(key); + buf.append(bencode(attributes.get(key))); + } + buf.append('e'); + return buf.toString(); + } + + private static final String bencode(Object val) { + if ( (val instanceof Integer) || (val instanceof Long) ) { + return "i" + val.toString() + "e"; + } else if (val.getClass().isArray()) { + StringBuffer buf = new StringBuffer(); + buf.append("l"); + Object vals[] = (Object[])val; + for (int i = 0; i < vals.length; i++) + buf.append(bencode(vals[i])); + buf.append("e"); + return buf.toString(); + } else { + String str = val.toString(); + return String.valueOf(str.length()) + ":" + val; + } + } + + private static final void bdecodeNext(StringBuffer remaining, TreeMap target) throws URISyntaxException { + String key = null; + while (true) { + switch (remaining.charAt(0)) { + case 'l': + List l = new ArrayList(); + boolean ok = true; + remaining.deleteCharAt(0); + while (bdecodeNext(remaining, l)) { + if (remaining.charAt(0) == 'e') { + String str[] = new String[l.size()]; + for (int i = 0; i < str.length; i++) + str[i] = (String)l.get(i); + target.put(key, str); + key = null; + remaining.deleteCharAt(0); + return; + } + } + // decode failed + throw new URISyntaxException(remaining.toString(), "Unterminated list"); + case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': + String str = bdecodeNext(remaining); + if (str == null) { + throw new URISyntaxException(remaining.toString(), "Undecoded string"); + } else if (key == null) { + key = str; + } else { + target.put(key, str); + key = null; + return; + } + break; + case 'i': + remaining.deleteCharAt(0); + int idx = remaining.indexOf("e"); + if (idx < 0) + throw new URISyntaxException(remaining.toString(), "No remaining 'e'"); + try { + String lstr = remaining.substring(0, idx); + long val = Long.parseLong(lstr); + if (key == null) + throw new URISyntaxException(remaining.toString(), "Numbers cannot be syndie uri keys"); + target.put(key, new Long(val)); + key = null; + remaining.delete(0, idx+1); + return; + } catch (NumberFormatException nfe) { + throw new URISyntaxException(remaining.toString(), "Invalid number format: " + nfe.getMessage()); + } + default: + throw new URISyntaxException(remaining.toString(), "Unsupported bencoding type"); + } + } + } + private static final boolean bdecodeNext(StringBuffer remaining, List target) { + String str = 
bdecodeNext(remaining); + if (str == null) return false; + target.add(str); + return true; + } + private static final String bdecodeNext(StringBuffer remaining) { + int br = remaining.indexOf(":"); + if (br <= 0) + return null; + String len = remaining.substring(0, br); + try { + int sz = Integer.parseInt(len); + remaining.delete(0, br+1); + String val = remaining.substring(0, sz); + remaining.delete(0, sz); + return val; + } catch (NumberFormatException nfe) { + return null; + } + } + /** + * bdecode the subset of bencoded data we require. The bencoded string must + * be a single dictionary and contain either strings, integers, or lists of + * strings. + */ + private static final TreeMap bdecode(String bencoded) throws URISyntaxException { + if ( (bencoded.charAt(0) != 'd') || (bencoded.charAt(bencoded.length()-1) != 'e') ) + throw new URISyntaxException(bencoded, "Not bencoded properly"); + StringBuffer buf = new StringBuffer(bencoded); + buf.deleteCharAt(0); + buf.deleteCharAt(buf.length()-1); + TreeMap rv = new TreeMap(); + while (buf.length() > 0) + bdecodeNext(buf, rv); + return rv; + } +} diff --git a/src/syndie/db/ArchiveChannel.java b/src/syndie/db/ArchiveChannel.java new file mode 100644 index 0000000..b35ad28 --- /dev/null +++ b/src/syndie/db/ArchiveChannel.java @@ -0,0 +1,231 @@ +package syndie.db; + +import java.io.*; +import java.util.*; +import net.i2p.data.*; + +/** + * describes a channel and all its messages as viewed from one particular index + */ +public class ArchiveChannel { + private byte[] _scope; + private long _metaVersion; + private long _receiveDate; + private List _messageEntries; + private List _pseudoAuthorizedMessages; + private List _unauthMessageEntries; + private long _knownMessageCount; + private long _entrySize; + private UI _ui; + + public ArchiveChannel(UI ui) { + _ui = ui; + _messageEntries = null; + _pseudoAuthorizedMessages = null; + _unauthMessageEntries = null; + _knownMessageCount = -1; + } + public byte[] getScope() { return _scope; } + public long getVersion() { return _metaVersion; } + public long getReceiveDate() { return _receiveDate; } + public long getEntrySize() { return _entrySize; } + /** how many messages do we have MessageEntry values for */ + public int getMessageCount() { return (_messageEntries != null ? _messageEntries.size() : 0); } + /** how many messages does the archive know in the channel in total, even if its not referenced here */ + public long getKnownMessageCount() { + if ( (_knownMessageCount < 0) && (_messageEntries != null) ) + _knownMessageCount = _messageEntries.size(); + return _knownMessageCount; + } + public ArchiveMessage getMessage(int index) { return (ArchiveMessage)_messageEntries.get(index); } + /** + * messages that are not authorized at all, not even though channel specific criteria + */ + public int getUnauthorizedMessageCount() { return (_unauthMessageEntries != null ? _unauthMessageEntries.size() : 0); } + public ArchiveMessage getUnauthorizedMessage(int index) { return (ArchiveMessage)_unauthMessageEntries.get(index); } + /** + * messages that wouldn't typically be authorized, but met some channel specific criteria allowing + * it to be included, such as "allow replies" and the post is a reply to a normally authorized message + */ + public int getPseudoAuthorizedMessageCount() { return (_pseudoAuthorizedMessages != null ? 
_pseudoAuthorizedMessages.size() : 0); } + public ArchiveMessage getPseudoAuthorizedMessage(int index) { return (ArchiveMessage)_pseudoAuthorizedMessages.get(index); } + + void setScope(byte scope[]) { _scope = scope; } + void setVersion(long version) { _metaVersion = version; } + void setReceiveDate(long when) { _receiveDate = when; } + void setMessages(List messages) { _messageEntries = messages; } + void setPseudoAuthorizedMessages(List messages) { _pseudoAuthorizedMessages = messages; } + void setUnauthorizedMessages(List messages) { _unauthMessageEntries = messages; } + void setEntrySize(long size) { _entrySize = size; } + + public void write(OutputStream out, boolean newOnly, boolean chanOnly, boolean includeUnauthorized) throws IOException { + try { + _ui.debugMessage("Writing channel " + Base64.encode(getScope()) + " (new? " + newOnly + " meta? " + chanOnly + " unauthorized? " + includeUnauthorized + ")"); + //$scopeHash + out.write(getScope()); + //$metaVersion + DataHelper.writeLong(out, 4, getVersion()); + //$recvDate + DataHelper.writeLong(out, 4, getReceiveDate()/24*60*60*1000l); + //$metadataEntrySize + DataHelper.writeLong(out, 4, getEntrySize()); + //$numMessages + DataHelper.writeLong(out, 4, getMessageCount()); + + if (chanOnly) { + // subsequent messages + DataHelper.writeLong(out, 4, 0); + // unauthorized/pseudoauthorized messages + DataHelper.writeLong(out, 4, 0); + } else { + //foreach (message) + int numToWrite = getMessageCount(); + if (includeUnauthorized) { + DataHelper.writeLong(out, 4, 0); + } else { + if (newOnly) { + numToWrite = 0; + for (int j = 0; j < getMessageCount(); j++) { + ArchiveMessage msg = getMessage(j); + if (msg.getIsNew()) + numToWrite++; + } + } + DataHelper.writeLong(out, 4, numToWrite); + _ui.debugMessage("Including fully authorized messages: " + numToWrite); + for (int j = 0; !includeUnauthorized && j < getMessageCount(); j++) { + ArchiveMessage msg = getMessage(j); + // $messageId + // $recvDate + // $entrySize + // $flags {authorized|isReply|isPBE} + if (msg.getIsNew() || !newOnly) { + DataHelper.writeLong(out, 8, msg.getMessageId()); + DataHelper.writeLong(out, 4, msg.getReceiveDate()/24*60*60*1000l); + DataHelper.writeLong(out, 4, msg.getEntrySize()); + DataHelper.writeLong(out, 1, msg.getFlags()); + _ui.debugMessage("\t" + msg.getPrimaryScope().toBase64() + ":" + msg.getMessageId()); + } + } + } + + // the index either includes unauthorized posts or pseudoauthorized + // posts + Map thirdParty = new HashMap(); + if (includeUnauthorized) { + _ui.debugMessage("Including unauthorized messages: " + getUnauthorizedMessageCount()); + for (int i = 0; i < getUnauthorizedMessageCount(); i++) { + ArchiveMessage msg = getUnauthorizedMessage(i); + if (!msg.getIsNew() && newOnly) + continue; + List msgs = (List)thirdParty.get(msg.getPrimaryScope()); + if (msgs == null) { + msgs = new ArrayList(); + thirdParty.put(msg.getPrimaryScope(), msgs); + } + msgs.add(msg); + } + } else { + _ui.debugMessage("Including pseudoauthorized messages: " + getPseudoAuthorizedMessageCount()); + for (int i = 0; i < getPseudoAuthorizedMessageCount(); i++) { + ArchiveMessage msg = getPseudoAuthorizedMessage(i); + if (!msg.getIsNew() && newOnly) + continue; + List msgs = (List)thirdParty.get(msg.getPrimaryScope()); + if (msgs == null) { + msgs = new ArrayList(); + thirdParty.put(msg.getPrimaryScope(), msgs); + } + msgs.add(msg); + } + } + DataHelper.writeLong(out, 4, thirdParty.size()); + for (Iterator iter = thirdParty.keySet().iterator(); iter.hasNext(); ) { + Hash 
scope = (Hash)iter.next(); + List msgs = (List)thirdParty.get(scope); + out.write(scope.getData()); + DataHelper.writeLong(out, 4, msgs.size()); + for (int i = 0; i < msgs.size(); i++) { + ArchiveMessage msg = (ArchiveMessage)msgs.get(i); + DataHelper.writeLong(out, 8, msg.getMessageId()); + DataHelper.writeLong(out, 4, msg.getReceiveDate()/24*60*60*1000L); + DataHelper.writeLong(out, 4, msg.getEntrySize()); + DataHelper.writeLong(out, 1, msg.getFlags()); + _ui.debugMessage("\t" + msg.getPrimaryScope().toBase64() + ":" + msg.getMessageId()); + } + } + } + } catch (DataFormatException dfe) { + throw new IOException("Invalid number: " + dfe.getMessage()); + } + } + + public boolean read(InputStream in, boolean includesUnauthorized) throws IOException { + try { + byte scope[] = new byte[32]; + int read = DataHelper.read(in, scope); + if (read <= 0) + return false; + if (read != scope.length) + throw new IOException("Not enough data for the scope (read=" + read + ")"); + Hash scopeHash = new Hash(scope); + long version = DataHelper.readLong(in, 4); + long recvDate = DataHelper.readLong(in, 4)*24*60*60*1000l; + long entrySize = DataHelper.readLong(in, 4); + + long numMsgs = DataHelper.readLong(in, 4); + int subsequent = (int)DataHelper.readLong(in, 4); + for (int i = 0; i < subsequent; i++) { + ArchiveMessage msg = new ArchiveMessage(); + long msgId = DataHelper.readLong(in, 8); + long msgRecv = DataHelper.readLong(in, 4)*24*60*60*1000l; + long msgSize = DataHelper.readLong(in, 4); + int msgFlags = (int)DataHelper.readLong(in, 1); + msg.setPrimaryScope(scopeHash); + msg.setMessageId(msgId); + msg.setReceiveDate(msgRecv); + msg.setEntrySize(msgSize); + msg.setFlags(msgFlags); + if (_messageEntries == null) + _messageEntries = new ArrayList(); + _messageEntries.add(msg); + } + + List thirdParty = new ArrayList(); + int thirdPartyMsgs = (int)DataHelper.readLong(in, 4); + for (int i = 0; i < thirdPartyMsgs; i++) { + byte origScope[] = new byte[32]; + if (32 != DataHelper.read(in, origScope)) + throw new IOException("Not enough data to read the orig scope"); + Hash thirdPartyChan = new Hash(origScope); + int msgs = (int)DataHelper.readLong(in, 4); + for (int j = 0; j < msgs; j++) { + long curMsgId = DataHelper.readLong(in, 8); + long curRecvDate = DataHelper.readLong(in, 4)*24*60*60*1000L; + int curEntrySize = (int)DataHelper.readLong(in, 4); + int curFlags = (int)DataHelper.readLong(in, 1); + ArchiveMessage curMsg = new ArchiveMessage(); + curMsg.setMessageId(curMsgId); + curMsg.setReceiveDate(curRecvDate); + curMsg.setEntrySize(curEntrySize); + curMsg.setFlags(curFlags); + curMsg.setPrimaryScope(thirdPartyChan); + thirdParty.add(curMsg); + } + } + if (includesUnauthorized) + _unauthMessageEntries = thirdParty; + else + _pseudoAuthorizedMessages = thirdParty; + + _scope = scope; + _knownMessageCount = numMsgs; + _metaVersion = version; + _receiveDate = recvDate; + _entrySize = entrySize; + return true; + } catch (DataFormatException dfe) { + throw new IOException("Invalid number: " + dfe.getMessage()); + } + } +} diff --git a/src/syndie/db/ArchiveDiff.java b/src/syndie/db/ArchiveDiff.java new file mode 100644 index 0000000..b43df66 --- /dev/null +++ b/src/syndie/db/ArchiveDiff.java @@ -0,0 +1,138 @@ +package syndie.db; + +import java.util.*; + +/** + * summarize the differences between the index and the local database + */ +public class ArchiveDiff { + // class fields are being exposed directly contrary to good standards so that + // the archive index and syndicators can simply rework the data. 
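+    // a minimal consumption sketch (assuming an index has already been built or loaded):
+    //   ArchiveDiff diff = index.diff(client, ui, opts);
+    //   List toFetch = diff.getFetchNewURIs(true); // SyndieURIs not yet known locally, replies included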
it is + // package scoped though, so the tight coupling isn't too bad + + /** how many new channels the index has that we do not */ + int totalNewChannels; + /** how many new messages the index has that we do not */ + int totalNewMessages; + /** how many new messages they have that we do not */ + int totalNewMessagesOnKnownChannels; + /** hopefully pretty self-explanatory */ + int totalKnownChannelsWithNewMessages; + /** channels that we know whose metadata has been updated remotely */ + int totalUpdatedChannels; + /** if we wanted to only fetch things we did not already have, how much data would we fetch? */ + long fetchNewBytes; + /** if we wanted to only fetch things we did not already have, how many metadata messages would we fetch? contains SyndieURIs*/ + List fetchNewMetadata; + /** if we wanted to only fetch things we did not already have, how many posts would we fetch? contains SyndieURIs */ + List fetchNewPosts; + /** if we wanted to only fetch things we did not already have, how many replies would we fetch? contains SyndieURIs */ + List fetchNewReplies; + /** if we wanted to only fetch posts on channels known locally, how much data would we fetch? contains SyndieURIs */ + long fetchKnownBytes; + /** if we wanted to only fetch posts on channels known locally, how many metadata messages would we fetch? contains SyndieURIs */ + List fetchKnownMetadata; + /** if we wanted to only fetch posts on channels known locally, how many posts would we fetch? contains SyndieURIs */ + List fetchKnownPosts; + /** if we wanted to only fetch posts on channels known locally, how many replies would we fetch? contains SyndieURIs */ + List fetchKnownReplies; + /** if we wanted to only fetch updated metatdata, how much data would we fetch? */ + long fetchMetaBytes; + /** if we wanted to only fetch updated metadata, how many metadata messages would we fetch? contains SyndieURIs */ + List fetchMetaMessages; + /** + * if we wanted to fetch all of the information the archive marks as "new", even if + * we already have it locally (as a crude form of information-theoretic anonymity via + * private information retrieval), how much data would we need to download? + */ + long fetchPIRBytes; + /** + * if we wanted to fetch all of the information the archive marks as "new", even if + * we already have it locally (as a crude form of information-theoretic anonymity via + * private information retrieval), how many metadata messages would we need to download? + */ + List fetchPIRMetadata; + /** + * if we wanted to fetch all of the information the archive marks as "new", even if + * we already have it locally (as a crude form of information-theoretic anonymity via + * private information retrieval), how many posts would we need to download? + */ + List fetchPIRPosts; + /** + * if we wanted to fetch all of the information the archive marks as "new", even if + * we already have it locally (as a crude form of information-theoretic anonymity via + * private information retrieval), how many replies would we need to download? + */ + List fetchPIRReplies; + /** if we wanted to only fetch new unauthorized posts, how much data would we fetch? */ + long fetchNewUnauthorizedBytes; + /** if we wanted to only fetch new unauthorized posts, how many metadata messages would we fetch? */ + List fetchNewUnauthorizedMetadata; + /** if we wanted to only fetch new unauthorized posts, how many posts would we fetch? */ + List fetchNewUnauthorizedPosts; + /** if we wanted to only fetch new unauthorized posts, how many replies would we fetch? 
*/ + List fetchNewUnauthorizedReplies; + + /** what was the max message size used when calculating the diff */ + long maxSizeUsed; + + public ArchiveDiff() { + fetchNewMetadata = new ArrayList(); + fetchNewPosts = new ArrayList(); + fetchNewReplies = new ArrayList(); + fetchKnownMetadata = new ArrayList(); + fetchKnownPosts = new ArrayList(); + fetchKnownReplies = new ArrayList(); + fetchMetaMessages = new ArrayList(); + fetchPIRMetadata = new ArrayList(); + fetchPIRPosts = new ArrayList(); + fetchPIRReplies = new ArrayList(); + fetchNewUnauthorizedMetadata = new ArrayList(); + fetchNewUnauthorizedPosts = new ArrayList(); + fetchNewUnauthorizedReplies = new ArrayList(); + maxSizeUsed = -1; + } + + /** SyndieURI instances of the URIs to fetch if only grabbing ones we don't have */ + public List getFetchNewURIs(boolean includeReplies) { + List rv = new ArrayList(); + rv.addAll(fetchNewMetadata); + rv.addAll(fetchNewPosts); + if (includeReplies) + rv.addAll(fetchNewReplies); + return rv; + } + /** SyndieURI instances of the URIs to fetch if only grabbing ones on channels known locally */ + public List getFetchKnownURIs(boolean includeReplies) { + List rv = new ArrayList(); + rv.addAll(fetchKnownMetadata); + rv.addAll(fetchKnownPosts); + if (includeReplies) + rv.addAll(fetchKnownReplies); + return rv; + } + /** SyndieURI instances of the URIs to fetch if only grabbing updated metadata */ + public List getFetchMetaURIs() { + List rv = new ArrayList(); + rv.addAll(fetchMetaMessages); + return rv; + } + /** SyndieURI instances of the URIs to fetch if only grabbing PIR style */ + public List getFetchPIRURIs() { + List rv = new ArrayList(); + rv.addAll(fetchPIRMetadata); + rv.addAll(fetchPIRPosts); + rv.addAll(fetchPIRReplies); + return rv; + } + /** SyndieURI instances of the URIs to fetch if only grabbing new unauthorized ones */ + public List getFetchNewUnauthorizedURIs(boolean includeReplies) { + List rv = new ArrayList(); + rv.addAll(fetchNewUnauthorizedMetadata); + rv.addAll(fetchNewUnauthorizedPosts); + if (includeReplies) + rv.addAll(fetchNewUnauthorizedReplies); + return rv; + } +} + diff --git a/src/syndie/db/ArchiveIndex.java b/src/syndie/db/ArchiveIndex.java new file mode 100644 index 0000000..1a39620 --- /dev/null +++ b/src/syndie/db/ArchiveIndex.java @@ -0,0 +1,451 @@ +package syndie.db; + +import java.io.*; +import java.util.*; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.ChannelInfo; +import syndie.data.MessageInfo; +import syndie.data.SyndieURI; + +/** + * + */ +public class ArchiveIndex { + private List _channelEntries; + + /** default max file size to include in the index when filtering */ + static final long DEFAULT_MAX_SIZE = 32*1024; + + private ArchiveIndex() { + _channelEntries = new ArrayList(); + } + + public int getChannelCount() { return _channelEntries.size(); } + public ArchiveChannel getChannel(int index) { return (ArchiveChannel)_channelEntries.get(index); } + private void addChannel(ArchiveChannel channel) { _channelEntries.add(channel); } + + public ArchiveMessage getMessage(SyndieURI uri) { + ArchiveChannel chan = getChannel(uri); + if (chan != null) { + long msgId = uri.getMessageId().longValue(); + for (int j = 0; j < chan.getMessageCount(); j++) { + ArchiveMessage me = chan.getMessage(j); + if (me.getMessageId() == msgId) + return me; + } + } + return null; + } + public ArchiveChannel getChannel(SyndieURI uri) { + for (int i = 0; i < getChannelCount(); i++) { + ArchiveChannel e = getChannel(i); + if (DataHelper.eq(e.getScope(), 
uri.getScope().getData())) + return e; + } + return null; + } + + /** how new a message has to be to be considered 'new' */ + private static final int AGE_NEW_DAYS = 3; + + public static ArchiveIndex buildIndex(DBClient client, UI ui, File archiveDir, long maxSize) throws IOException { + ArchiveIndex index = new ArchiveIndex(); + File channelDirs[] = archiveDir.listFiles(); + for (int i = 0; i < channelDirs.length; i++) { + if (channelDirs[i].isDirectory()) + buildChannelIndex(client, ui, channelDirs[i], index, maxSize); + } + return index; + } + public static ArchiveIndex buildChannelIndex(DBClient client, UI ui, File channelDir, long maxSize) throws IOException { + ArchiveIndex index = new ArchiveIndex(); + buildChannelIndex(client, ui, channelDir, index, maxSize); + return index; + } + + /** + * rather than automatically including all of the unauthorized messages, + * only include up to 50 'new' messages + */ + private static final int MAX_UNAUTHORIZED_INDEXED = 50; + + private static void buildChannelIndex(DBClient client, UI ui, File channelDir, ArchiveIndex index, long maxSize) throws IOException { + byte chanHash[] = Base64.decode(channelDir.getName()); + long chanId = client.getChannelId(new Hash(chanHash)); + if (chanId < 0) { + ui.errorMessage("Channel " + channelDir.getName() + " is invalid within the archive?"); + return; + } + + ChannelInfo info = client.getChannel(chanId); + if (info == null) { + ui.errorMessage("Channel " + channelDir.getName() + " is in the archive, but not the database?"); + return; + } + + ArchiveChannel chan = new ArchiveChannel(ui); + + List messages = new ArrayList(); + List pseudoAuthMessages = new ArrayList(); + List unauthorizedMessages = new ArrayList(); + + // grab authorized messages + List authorizedIds = client.getMessageIdsAuthorized(info.getChannelHash()); + for (int i = 0; i < authorizedIds.size(); i++) { + ui.debugMessage("Authorized messageIds for " + info.getChannelHash().toBase64() + ": " + authorizedIds); + Long msgId = (Long)authorizedIds.get(i); + MessageInfo msgInfo = client.getMessage(msgId.longValue()); + if (msgInfo != null) { + if ( (msgInfo.getExpiration() > 0) && (msgInfo.getExpiration() < System.currentTimeMillis()) ) + continue; + File msgFile = null; + if (msgInfo.getScopeChannel().equals(info.getChannelHash())) { + msgFile = new File(channelDir, msgInfo.getMessageId() + Constants.FILENAME_SUFFIX); + } else { + File dir = new File(channelDir.getParentFile(), msgInfo.getScopeChannel().toBase64()); + msgFile = new File(dir, msgInfo.getMessageId() + Constants.FILENAME_SUFFIX); + } + if (msgFile.exists()) { + long size = msgFile.length(); + + String name = msgFile.getName(); + long when = msgFile.lastModified(); + when = when - (when % 24*60*60*1000); // ignore the time of day + + if (size > maxSize) + continue; + + ArchiveMessage entry = new ArchiveMessage(); + entry.setMessageId(msgInfo.getMessageId()); + entry.setReceiveDate(when); + entry.setEntrySize(size); + entry.setPrimaryScope(msgInfo.getScopeChannel()); + + boolean isNew = false; + if (when >= System.currentTimeMillis() - AGE_NEW_DAYS*24*60*60*1000) + isNew = true; + + int flags = 0; + if (msgInfo.getWasPrivate()) + flags |= ArchiveMessage.MASK_REPLY; + if (msgInfo.getWasAuthorized()) + flags |= ArchiveMessage.MASK_AUTHORIZED; + if (msgInfo.getWasPassphraseProtected()) + flags |= ArchiveMessage.MASK_PBE; + if (isNew) + flags |= ArchiveMessage.MASK_NEW; + entry.setFlags(flags); + + if (info.getChannelHash().equals(entry.getPrimaryScope())) + messages.add(entry); + else 
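+                        // authorized for this channel but scoped under another channel,
+                        // so index it as pseudo-authorized rather than as a regular message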
+ pseudoAuthMessages.add(entry); + } else { + // ok, known authenticated post, but we don't have the original + // signed message anymore + } + } + } + + // grab unauthorized yet authenticated messages + List authIds = client.getMessageIdsAuthenticated(info.getChannelHash()); + // grab unauthenticated messages + // ?!? why would we want to pass on unauthenticated? no unauthenticated unauthorized + // posts - just create a random identity and authenticate with that if you want to. + //List unauthIds = client.getMessageIdsUnauthenticated(info.getChannelHash()); + //unauthIds.addAll(authIds); + File archiveDir = client.getArchiveDir(); + for (int i = 0; i < authIds.size() && i < MAX_UNAUTHORIZED_INDEXED; i++) { + Long msgId = (Long)authIds.get(i); + MessageInfo msgInfo = client.getMessage(msgId.longValue()); + if (msgInfo != null) { + if ( (msgInfo.getExpiration() > 0) && (msgInfo.getExpiration() < System.currentTimeMillis()) ) + continue; + long scopeChanId = msgInfo.getScopeChannelId(); + ChannelInfo scopeChan = client.getChannel(scopeChanId); + if (scopeChan == null) + continue; + File scopeChanDir = new File(archiveDir, scopeChan.getChannelHash().toBase64()); + if (!scopeChanDir.exists()) + continue; // known in the db, not in the archive + File msgFile = new File(scopeChanDir, msgInfo.getMessageId() + Constants.FILENAME_SUFFIX); + if (msgFile.exists()) { + long size = msgFile.length(); + + String name = msgFile.getName(); + long when = msgFile.lastModified(); + when = when - (when % 24*60*60*1000); // ignore the time of day + + if (size > maxSize) + continue; + + boolean isNew = false; + if (when >= System.currentTimeMillis() - AGE_NEW_DAYS*24*60*60*1000) + isNew = true; + + if (!isNew) { + // unauth only includes new posts + continue; + } + + ArchiveMessage entry = new ArchiveMessage(); + entry.setMessageId(msgInfo.getMessageId()); + entry.setReceiveDate(when); + entry.setEntrySize(size); + entry.setPrimaryScope(scopeChan.getChannelHash()); + + int flags = 0; + if (msgInfo.getWasPrivate()) + flags |= ArchiveMessage.MASK_REPLY; + if (msgInfo.getWasAuthorized()) + flags |= ArchiveMessage.MASK_AUTHORIZED; + if (msgInfo.getWasPassphraseProtected()) + flags |= ArchiveMessage.MASK_PBE; + if (isNew) + flags |= ArchiveMessage.MASK_NEW; + entry.setFlags(flags); + + unauthorizedMessages.add(entry); + } else { + // ok, known unauthenticated post, but we don't have the original + // signed message anymore + } + } + } + + // grab the metadata + File mdFile = new File(channelDir, "meta" + Constants.FILENAME_SUFFIX); + long mdSize = mdFile.length(); + long mdDate = mdFile.lastModified(); + mdDate = mdDate - (mdDate % 24*60*60*1000); // ignore the time of day + chan.setReceiveDate(mdDate); + chan.setEntrySize(mdSize); + + chan.setScope(chanHash); + chan.setVersion(info.getEdition()); + chan.setMessages(messages); + chan.setPseudoAuthorizedMessages(pseudoAuthMessages); + chan.setUnauthorizedMessages(unauthorizedMessages); + + index.addChannel(chan); + return; + } + + public static ArchiveIndex loadIndex(File in, UI ui, boolean unauth) throws IOException { + ArchiveIndex index = new ArchiveIndex(); + FileInputStream fin = null; + try { + fin = new FileInputStream(in); + while (true) { + ArchiveChannel entry = new ArchiveChannel(ui); + boolean ok = entry.read(fin, unauth); + if (ok) { + if (unauth) { + ui.debugMessage("Index contains the unauthorized channel data for " + Base64.encode(entry.getScope())); + for (int i = 0 ; i < entry.getUnauthorizedMessageCount(); i++) { + ArchiveMessage msg = 
entry.getUnauthorizedMessage(i); + ui.debugMessage(i + ": " + msg.getPrimaryScope().toBase64() + ":" + msg.getMessageId() + "/" + msg.getIsAuthorized() + "/"+msg.getIsNew() + "/" + msg.getIsPasswordProtected() + "/" + msg.getIsReply() + "/" + ((msg.getEntrySize()+1023)/1024) + "KB"); + } + } else { + ui.debugMessage("Index contains the channel data for " + Base64.encode(entry.getScope())); + for (int i = 0 ; i < entry.getMessageCount(); i++) { + ArchiveMessage msg = entry.getMessage(i); + ui.debugMessage(i + ": " + msg.getMessageId() + "/" + msg.getIsAuthorized() + "/"+msg.getIsNew() + "/" + msg.getIsPasswordProtected() + "/" + msg.getIsReply() + "/" + ((msg.getEntrySize()+1023)/1024) + "KB"); + } + ui.debugMessage("Pseudoauthorized messages: " + entry.getPseudoAuthorizedMessageCount()); + for (int i = 0 ; i < entry.getPseudoAuthorizedMessageCount(); i++) { + ArchiveMessage msg = entry.getPseudoAuthorizedMessage(i); + ui.debugMessage(i + ": " + msg.getPrimaryScope().toBase64() +":" + msg.getMessageId() + "/" + msg.getIsAuthorized() + "/"+msg.getIsNew() + "/" + msg.getIsPasswordProtected() + "/" + msg.getIsReply() + "/" + ((msg.getEntrySize()+1023)/1024) + "KB"); + } + } + index.addChannel(entry); + } else { + break; + } + } + } finally { + if (fin != null) fin.close(); + } + return index; + } + + /** + * compare the current index and the locally known messages, filtering out + * banned/ignored/deleted posts/authors/channels, etc. + */ + public ArchiveDiff diff(DBClient client, UI ui, Opts opts) { + long maxSize = opts.getOptLong("maxSize", DEFAULT_MAX_SIZE); + ArchiveDiff rv = new ArchiveDiff(); + List banned = client.getBannedChannels(); + for (int i = 0; i < _channelEntries.size(); i++) { + ArchiveChannel chan = (ArchiveChannel)_channelEntries.get(i); + + if (chan.getEntrySize() > maxSize) { + ui.debugMessage("Indexed channel metadata is too large (" + chan.getEntrySize() + " bytes)"); + continue; + } + + byte scope[] = chan.getScope(); + if (banned.contains(new Hash(scope))) { + ui.debugMessage("Skipping banned channel " + Base64.encode(scope)); + continue; + } + + if (chan.getUnauthorizedMessageCount() > 0) { + diffUnauth(client, ui, opts, chan, banned, rv); + continue; + } + + long channelId = client.getChannelId(new Hash(scope)); + ChannelInfo chanInfo = null; + if (channelId >= 0) + chanInfo = client.getChannel(channelId); + + SyndieURI chanURI = SyndieURI.createScope(new Hash(scope)); + + if (chanInfo == null) { + rv.totalNewChannels++; + } else if (chan.getVersion() > chanInfo.getEdition()) { + rv.totalUpdatedChannels++; + rv.fetchKnownBytes += chan.getEntrySize(); + rv.fetchKnownMetadata.add(chanURI); + } + + if ( (chanInfo == null) || (chan.getVersion() > chanInfo.getEdition()) ) { + rv.fetchNewMetadata.add(chanURI); + rv.fetchNewBytes += chan.getEntrySize(); + rv.fetchMetaMessages.add(chanURI); + rv.fetchMetaBytes += chan.getEntrySize(); + } + + boolean newMsgFound = false; + boolean newPIRMsgFound = false; + List chanMsgs = new ArrayList(); + for (int j = 0; j < chan.getMessageCount(); j++) + chanMsgs.add(chan.getMessage(j)); + for (int j = 0; j < chan.getPseudoAuthorizedMessageCount(); j++) + chanMsgs.add(chan.getPseudoAuthorizedMessage(j)); + for (int j = 0; j < chanMsgs.size(); j++) { + ArchiveMessage msg = (ArchiveMessage)chanMsgs.get(j); + SyndieURI msgURI = SyndieURI.createMessage(msg.getPrimaryScope(), msg.getMessageId()); + if (!banned.contains(msg.getPrimaryScope())) { + long scopeId = client.getChannelId(msg.getPrimaryScope()); + long msgId = 
client.getMessageId(scopeId, msg.getMessageId()); + if ( (msgId < 0) && (msg.getEntrySize() <= maxSize) ) { + ui.debugMessage("new message: " + msg.getPrimaryScope().toBase64() + ":" + msg.getMessageId() + " (" + scopeId + "/" + msgId + ")"); + rv.fetchNewBytes += msg.getEntrySize(); + if (msg.getIsReply()) + rv.fetchNewReplies.add(msgURI); + else + rv.fetchNewPosts.add(msgURI); + if (chanInfo != null) { + rv.totalNewMessagesOnKnownChannels++; + rv.fetchKnownBytes += msg.getEntrySize(); + if (msg.getIsReply()) + rv.fetchKnownReplies.add(msgURI); + else + rv.fetchKnownPosts.add(msgURI); + } else { + rv.totalNewMessages++; + } + newMsgFound = true; + } + } + // even if it is banned, PIR requires it to be fetched + if (msg.getIsNew()) { + newPIRMsgFound = true; + rv.fetchPIRBytes += msg.getEntrySize(); + if (msg.getIsReply()) + rv.fetchPIRReplies.add(msgURI); + else + rv.fetchPIRPosts.add(msgURI); + } + } + if (newMsgFound && (chanInfo != null)) + rv.totalKnownChannelsWithNewMessages++; + if (newPIRMsgFound) { + rv.fetchPIRMetadata.add(chanURI); + rv.fetchPIRBytes += chan.getEntrySize(); + } + } + rv.maxSizeUsed = maxSize; + return rv; + } + + private void diffUnauth(DBClient client, UI ui, Opts opts, ArchiveChannel chan, List banned, ArchiveDiff rv) { + //todo: the unauth diff logic is off, populating Diff in an odd way, since an + // index containing diffs will contain ONLY diffs + // ?? is it still? + for (int i = 0 ; i < chan.getUnauthorizedMessageCount(); i++) { + ArchiveMessage msg = chan.getUnauthorizedMessage(i); + Hash scope = msg.getPrimaryScope(); + if (banned.contains(scope)) { + // banned author, but not banned target channel + continue; + } + long localChanId = client.getChannelId(scope); + if (localChanId >= 0) { + long localInternalId = client.getMessageId(localChanId, msg.getMessageId()); + if (localInternalId >= 0) { + // the unauthorized post is already known + continue; + } else { + // unauthorized post is not known + } + } else { + // unauthorized post is by an unknown author, so try + // to include the author's metadata in the to-fetch list + SyndieURI scopeMeta = SyndieURI.createScope(scope); + if (!rv.fetchNewUnauthorizedMetadata.contains(scopeMeta)) { + rv.fetchNewUnauthorizedMetadata.add(scopeMeta); + for (int j = 0; j < _channelEntries.size(); j++) { + ArchiveChannel curChan = (ArchiveChannel)_channelEntries.get(j); + if (curChan.getScope().equals(scope)) { + rv.fetchNewUnauthorizedBytes += curChan.getEntrySize(); + break; + } + } + } + } + + Hash targetScope = new Hash(chan.getScope()); + long localTargetChanId = client.getChannelId(targetScope); + if (localTargetChanId < 0) { + // unauthorized post is targetting an unknown channel, so try + // to include the target channel's metadata in the to-fetch list + if (!rv.fetchNewUnauthorizedMetadata.contains(targetScope)) { + rv.fetchNewUnauthorizedMetadata.add(SyndieURI.createScope(targetScope)); + for (int j = 0; j < _channelEntries.size(); j++) { + ArchiveChannel curChan = (ArchiveChannel)_channelEntries.get(j); + if (DataHelper.eq(curChan.getScope(),chan.getScope())) { + rv.fetchNewUnauthorizedBytes += curChan.getEntrySize(); + break; + } + } + } + } + + rv.fetchNewUnauthorizedBytes += msg.getEntrySize(); + SyndieURI uri = SyndieURI.createMessage(scope, msg.getMessageId()); + if (msg.getIsReply()) + rv.fetchNewUnauthorizedReplies.add(uri); + else + rv.fetchNewUnauthorizedPosts.add(uri); + } + } + + public static void main(String args[]) { + String path = "/home/jrandom/.syndie/archive/index-all.dat"; + if 
(args.length >= 1) + path = args[0]; + boolean unauth = false; + if ( (args.length >= 2) && ("unauth".equalsIgnoreCase(args[1])) ) + unauth = true; + try { + ArchiveIndex index = ArchiveIndex.loadIndex(new File(path), new TextUI(true), unauth); + } catch (IOException ex) { + ex.printStackTrace(); + } + } +} diff --git a/src/syndie/db/ArchiveMessage.java b/src/syndie/db/ArchiveMessage.java new file mode 100644 index 0000000..15e39e8 --- /dev/null +++ b/src/syndie/db/ArchiveMessage.java @@ -0,0 +1,41 @@ +package syndie.db; + +import net.i2p.data.*; + +/** +* describes a message in an archive index +*/ +public class ArchiveMessage { + private long _messageId; + private long _recvDate; + private long _entrySize; + private int _flags; + private boolean _isNew; + private Hash _primaryScope; + + /** is the post authorized */ + static final int MASK_AUTHORIZED = 1 << 7; + /** is the post a privately encrypted reply */ + static final int MASK_REPLY = 1 << 6; + /** is the post encrypted with password based encryption */ + static final int MASK_PBE = 1 << 5; + /** the archive considers the post 'new' */ + static final int MASK_NEW = 1 << 4; + + public long getMessageId() { return _messageId; } + public long getReceiveDate() { return _recvDate; } + public long getEntrySize() { return _entrySize; } + public boolean getIsNew() { return ((_flags & MASK_NEW) != 0); } + public boolean getIsAuthorized() { return ((_flags & MASK_AUTHORIZED) != 0); } + public boolean getIsReply() { return ((_flags & MASK_REPLY) != 0); } + public boolean getIsPasswordProtected() { return ((_flags & MASK_PBE) != 0); } + public int getFlags() { return _flags; } + /** channel that 'owns' the message (not necessary for authorized posts) */ + public Hash getPrimaryScope() { return _primaryScope; } + + void setMessageId(long id) { _messageId = id; } + void setReceiveDate(long when) { _recvDate = when; } + void setEntrySize(long size) { _entrySize = size; } + void setFlags(int flags) { _flags = flags; } + void setPrimaryScope(Hash channel) { _primaryScope = channel; } +} diff --git a/src/syndie/db/CLI.java b/src/syndie/db/CLI.java new file mode 100644 index 0000000..a16e022 --- /dev/null +++ b/src/syndie/db/CLI.java @@ -0,0 +1,82 @@ +package syndie.db; + +import java.io.IOException; +import java.util.*; +import net.i2p.data.Base64; +import net.i2p.data.DataHelper; + +/** + * + */ +public class CLI { + private static final String PREFIX = CLI.class.getName().substring(0, CLI.class.getName().lastIndexOf(".")); + public static interface Command { + public DBClient runCommand(Opts opts, UI ui, DBClient client); + } + private static final Object _commands[][] = new Object[][] { + new Object[] { "import", Importer.class }, + new Object[] { "register", LoginManager.class }, +// new Object[] { "login", LoginManager.class }, + new Object[] { "changen", ChanGen.class }, + new Object[] { "chanlist", ChanList.class }, + new Object[] { "keyimport", KeyImport.class }, + new Object[] { "keygen", KeyGen.class }, + new Object[] { "keylist", KeyList.class }, + new Object[] { "messagegen", MessageGen.class }, + new Object[] { "messageextract", MessageExtract.class }, + new Object[] { "viewmetadata", ViewMetadata.class }, + new Object[] { "messagelist", MessageList.class }, + new Object[] { "viewmessage", ViewMessage.class } + }; + + public static void main(String args[]) { + //args = new String[] { "Importer" }; + if ( (args == null) || (args.length <= 0) ) { + usage(); + return; + } + + Command cmd = getCommand(args[0]); + if (cmd != null) { + 
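+            // strip the command name and hand the remaining args to the command impl,
+            // e.g. (sketch): java syndie.db.CLI chanlist --debug true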
DBClient client = null; + try { + String params[] = new String[args.length-1]; + System.arraycopy(args, 1, params, 0, params.length); + Opts opts = new Opts(args[0], params); + client = cmd.runCommand(opts, new TextUI(opts.getOptBoolean("debug", false)), null); + } catch (Exception e) { + e.printStackTrace(); + } finally { + if (client != null) + client.close(); + } + } else { + usage(); + } + } + public static Command getCommand(String name) { + Class cls = null; + for (int i = 0; i < _commands.length; i++) { + if (name.equalsIgnoreCase(_commands[i][0].toString())) { + cls = (Class)_commands[i][1]; + break; + } + } + if (cls != null) { + try { + return (Command)cls.newInstance(); + } catch (Exception e) { + return null; + } + } else { + return null; + } + } + private static final void usage() { + System.err.println("Usage: $command [$args]*"); + System.err.print("Known commands: "); + for (int i = 0; i < _commands.length; i++) + System.err.print(_commands[i][0].toString() + " "); + System.err.println(); + } +} \ No newline at end of file diff --git a/src/syndie/db/ChanGen.java b/src/syndie/db/ChanGen.java new file mode 100644 index 0000000..5c3447b --- /dev/null +++ b/src/syndie/db/ChanGen.java @@ -0,0 +1,390 @@ +package syndie.db; + +import gnu.crypto.hash.Sha256Standalone; +import java.io.*; +import java.util.*; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.ChannelInfo; +import syndie.data.EnclosureBody; +import syndie.data.NymKey; +import syndie.data.ReferenceNode; + +/** + *changen + * [--channelId $internalId] // if set, try to update the given channel rather than create a new one (only if authorized) + * --name $name + * [--description $desc] + * [--avatar $filename] // location of 32x32 PNG formatted avatar + * [--edition $num] // edition to publish, or an automatically chosen value if not specified + * [--publicPosting $boolean] // can anyone create new threads? + * [--publicReplies $boolean] // can anyone reply to posts? 
+ * [--pubTag $tag]* + * [--privTag $tag]* + * [--postKey $base64PubKey]* // who is allowed to post to the channel + * [--manageKey $base64PubKey]* // who is allowed to manage the channel + * [--refs $channelRefGroupFile] // ([\t]*$name\t$uri\t$refType\t$description\n)* lines + * [--pubArchive $archive]* + * [--privArchive $archive]* + * [--encryptContent $boolean] // don't publicize the key encrypting the metadata, and include a session key in the encrypted metadata to read posts with + * [--bodyPassphrase $passphrase --bodyPassphrasePrompt $prompt] + * // derive the body key from the passphrase, and include a publicly + * // visible hint to prompt it + * --metaOut $metadataFile // signed metadata file, ready to import + * --keyManageOut $keyFile // signing private key to manage + * --keyReplyOut $keyFile // decrypt private key to read replies + * [--keyEncryptPostOut $keyFile] // key used to encrypt posts (may be hidden if --encryptContent, otherwise anyone can get it too) + * [--keyEncryptMetaOut $keyFile] // key used to encrypt metadata (if --encryptContent) + */ +public class ChanGen extends CommandImpl { + private I2PAppContext _ctx; + public ChanGen(I2PAppContext ctx) { _ctx = ctx; } + public ChanGen() { this(I2PAppContext.getGlobalContext()); } + + public DBClient runCommand(Opts args, UI ui, DBClient client) { + List missing = args.requireOpts(new String[] { "name", /*"metaOut",*/ "keyManageOut", "keyReplyOut" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + + if (args.getOptBoolean("encryptContent", false) && (args.getOptValue("keyEncryptPostOut") == null) ) { + ui.errorMessage("When posts should be encrypted, you probably want to generate a key they should use, 'eh?"); + ui.errorMessage("(so either use --keyEncryptPostOut $outFile or use --encryptContent false)"); + ui.commandComplete(-1, null); + return client; + } + + Object repKeys[] = _ctx.keyGenerator().generatePKIKeypair(); + PublicKey replyPublic = (PublicKey)repKeys[0]; + PrivateKey replyPrivate = (PrivateKey)repKeys[1]; + Object identKeys[] = _ctx.keyGenerator().generateSigningKeypair(); + SigningPublicKey identPublic = (SigningPublicKey)identKeys[0]; + SigningPrivateKey identPrivate = (SigningPrivateKey)identKeys[1]; + SessionKey bodyKey = _ctx.keyGenerator().generateSessionKey(); + SessionKey readKey = _ctx.keyGenerator().generateSessionKey(); // not always used + + String out = args.getOptValue("metaOut"); + if (out == null) { + File chanDir = new File(client.getOutboundDir(), identPublic.calculateHash().toBase64()); + chanDir.mkdirs(); + out = new File(chanDir, "meta" + Constants.FILENAME_SUFFIX).getPath(); + } + + long existingChannelId = args.getOptLong("channelId", -1); + if (existingChannelId >= 0) { + ChannelInfo existing = client.getChannel(existingChannelId); + if (existing == null) { + ui.errorMessage("Cannot update the channel " + existingChannelId + ", as it is not known?"); + ui.commandComplete(-1, null); + return client; + } + PublicKey enc = existing.getEncryptKey(); + PrivateKey encPriv = null; + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), existing.getChannelHash(), Constants.KEY_FUNCTION_REPLY); + if ( (keys != null) && (keys.size() >= 0) ) { + for (int i = 0; i < keys.size(); i++) { + NymKey k = (NymKey)keys.get(i); + PrivateKey priv = new PrivateKey(k.getData()); + PublicKey curPub = client.ctx().keyGenerator().getPublicKey(priv); + if (curPub.equals(enc)) { + encPriv = 
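// An illustrative invocation sketch for the changen option list documented above:
// generating a passphrase-protected channel whose posts are privately encrypted.
// The name, passphrase, prompt and output paths are made-up example values
// (compare the plain invocation in ChanGen.main() at the bottom of this file).
import syndie.db.CLI;

public class ChanGenPbeExample {
    public static void main(String args[]) {
        CLI.main(new String[] { "changen",
                "--name", "my protected forum",
                "--encryptContent", "true",
                "--bodyPassphrase", "open sesame",
                "--bodyPassphrasePrompt", "what do you say to enter?",
                "--metaOut", "/tmp/metaOut",
                "--keyManageOut", "/tmp/manageOut",
                "--keyReplyOut", "/tmp/replyOut",
                "--keyEncryptPostOut", "/tmp/postKeyOut",
                "--keyEncryptMetaOut", "/tmp/metaKeyOut" });
    }
}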
priv; + break; + } + } + } + SigningPublicKey ident = existing.getIdentKey(); + SigningPrivateKey identPriv = null; + keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), existing.getChannelHash(), Constants.KEY_FUNCTION_MANAGE); + if ( (keys != null) && (keys.size() >= 0) ) { + for (int i = 0; i < keys.size(); i++) { + NymKey k = (NymKey)keys.get(i); + SigningPrivateKey priv = new SigningPrivateKey(k.getData()); + SigningPublicKey curPub = client.ctx().keyGenerator().getSigningPublicKey(priv); + if (curPub.equals(ident)) { + identPriv = priv; + break; + } + } + } + + if (identPriv == null) { + ui.errorMessage("Not authorized to update the channel " + ident.calculateHash().toBase64()); + ui.commandComplete(-1, null); + return client; + } + + identPublic = ident; + identPrivate = identPriv; + replyPublic = enc; + replyPrivate = encPriv; // may be null, in case we are allowed to manage but not receive replies + + keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), existing.getChannelHash(), Constants.KEY_FUNCTION_READ); + if ( (keys != null) && (keys.size() > 0) ) { + int idx = client.ctx().random().nextInt(keys.size()); + NymKey k = (NymKey)keys.get(idx); + bodyKey = new SessionKey(k.getData()); + readKey = new SessionKey(k.getData()); + } else { + // use the channel's default read keys + Set readKeys = existing.getReadKeys(); + int idx = client.ctx().random().nextInt(readKeys.size()); + SessionKey cur = null; + Iterator iter = readKeys.iterator(); + for (int i = 0; i < idx; i++) + iter.next(); // ignore + cur = (SessionKey)iter.next(); + bodyKey = cur; + readKey = cur; + } + + } + + if (true) { + SigningPublicKey testPub = client.ctx().keyGenerator().getSigningPublicKey(identPrivate); + if (identPublic.equals(testPub)) { + // ok, gravity works + } else { + ui.errorMessage("Signing private key b0rked: " + identPrivate.toBase64()); + ui.errorMessage("It generates a public key: " + testPub.toBase64()); + ui.errorMessage("that does not match the orig pub key: " + identPublic.toBase64()); + ui.commandComplete(-1, null); + return client; + } + } + + Map pubHeaders = generatePublicHeaders(ui, args, replyPublic, identPublic, bodyKey, readKey); + Map privHeaders = generatePrivateHeaders(ui, args, replyPublic, identPublic, bodyKey, readKey); + + String refStr = null; + String filename = args.getOptValue("refs"); + if (filename != null) { + FileInputStream fin = null; + File f = new File(filename); + if (f.exists()) { + try { + fin = new FileInputStream(f); + List refNodes = ReferenceNode.buildTree(fin); + refStr = ReferenceNode.walk(refNodes); + } catch (IOException ioe) { + ui.errorMessage("Error pulling in the refs", ioe); + ui.commandComplete(-1, null); + return client; + } finally { + if (fin != null) try { fin.close(); } catch (IOException ioe) {} + } + } + } + + byte avatar[] = read(ui, args.getOptValue("avatar"), Constants.MAX_AVATAR_SIZE); + + boolean ok = writeMeta(ui, out, refStr, identPublic, identPrivate, bodyKey, pubHeaders, privHeaders, avatar); + if (ok) + ok = writeKey(ui, args.getOptValue("keyManageOut"), identPrivate, identPublic.calculateHash()); + if (ok && (replyPrivate != null)) + ok = writeKey(ui, args.getOptValue("keyReplyOut"), replyPrivate, identPublic.calculateHash()); + if (ok && (args.getOptBoolean("encryptContent", false))) + ok = writeKey(ui, args.getOptValue("keyEncryptMetaOut"), bodyKey, identPublic.calculateHash()) && + writeKey(ui, args.getOptValue("keyEncryptPostOut"), readKey, identPublic.calculateHash()); + if (ok) + 
ui.commandComplete(0, null); + else + ui.commandComplete(-1, null); + + return client; + } + + private Map generatePublicHeaders(UI ui, Opts args, PublicKey replyPublic, SigningPublicKey identPublic, SessionKey bodyKey, SessionKey readKey) { + Map rv = new HashMap(); + + rv.put(Constants.MSG_HEADER_TYPE, Constants.MSG_TYPE_META); + + // tags + List tags = args.getOptValues("pubTag"); + if ( (tags != null) && (tags.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < tags.size(); i++) + buf.append(strip((String)tags.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_TAGS, buf.toString()); + } + + // ident + rv.put(Constants.MSG_META_HEADER_IDENTITY, identPublic.toBase64()); + // reply + rv.put(Constants.MSG_META_HEADER_ENCRYPTKEY, replyPublic.toBase64()); + // edition, defaulting to 0 (should this instead default to trunc(now(), yyyy/mm)?) + rv.put(Constants.MSG_META_HEADER_EDITION, Long.toString(args.getOptLong("edition", 0))); + if ( (args.getOptValue("bodyPassphrase") != null) && (args.getOptValue("bodyPassphrasePrompt") != null) ) { + String passphrase = strip(args.getOptValue("bodyPassphrase")); + byte salt[] = new byte[32]; + _ctx.random().nextBytes(salt); + SessionKey pbeKey = _ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(passphrase)); + bodyKey.setData(pbeKey.getData()); + String prompt = strip(args.getOptValue("bodyPassphrasePrompt")); + rv.put(Constants.MSG_HEADER_PBE_PROMPT, prompt); + rv.put(Constants.MSG_HEADER_PBE_PROMPT_SALT, Base64.encode(salt)); + } else if (!args.getOptBoolean("encryptContent", false)) { + // if we are NOT trying to privately encrypt the content, then publicize the bodyKey in the public + // headers (so anyone can open the zip content and read the private headers/refs/avatar/etc) + //rv.put(Constants.MSG_META_HEADER_POST_KEYS, readKey.toBase64()); // keep in the private headers + rv.put(Constants.MSG_HEADER_BODYKEY, bodyKey.toBase64()); + } + // can any authenticated (yet not necessarily authorized) post go through? + if (args.getOptBoolean("publicPosting", false)) + rv.put(Constants.MSG_META_HEADER_PUBLICPOSTING, "true"); + // can any authenticated (yet not necessarily authorized) reply to an existing post go through? 
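// A minimal sketch of the --bodyPassphrase handling above: the body key is derived
// from the passphrase and a random 32-byte salt, and only the prompt plus the
// Base64 salt are published in the public headers; a reader who knows the answer
// repeats the derivation and arrives at the same AES-256 key. The passphrase below
// is a made-up example.
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.data.SessionKey;

public class PbeKeyDerivationExample {
    public static void main(String args[]) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();
        byte salt[] = new byte[32];
        ctx.random().nextBytes(salt);

        // writer side: derive the body key, publish only the salt and the prompt
        SessionKey writerKey = ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8("open sesame"));

        // reader side: same salt (from the PBE salt header) + same answer => same key
        SessionKey readerKey = ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8("open sesame"));

        System.out.println("keys match? " + writerKey.equals(readerKey)); // true
    }
}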
+ if (args.getOptBoolean("publicReplies", false)) + rv.put(Constants.MSG_META_HEADER_PUBLICREPLY, "true"); + // what keys can authorize posts (in addition to the channel ident key, of course) + List auth = args.getOptValues("postKey"); + if ( (auth != null) && (auth.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < auth.size(); i++) + buf.append(strip((String)auth.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_POST_KEYS, buf.toString()); + } + // what keys can create new metadata messages (in addition to the channel ident key, of course) + List manage = args.getOptValues("manageKey"); + if ( (manage != null) && (manage.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < manage.size(); i++) + buf.append(strip((String)manage.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_MANAGER_KEYS, buf.toString()); + } + // publicly visible archives of this channel + List archives = args.getOptValues("pubArchive"); + if ( (archives != null) && (archives.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < archives.size(); i++) + buf.append(strip((String)archives.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_ARCHIVES, buf.toString()); + } + + ui.debugMessage("public headers: " + rv); + return rv; + } + private Map generatePrivateHeaders(UI ui, Opts args, PublicKey replyPublic, SigningPublicKey identPublic, SessionKey bodyKey, SessionKey readKey) { + Map rv = new HashMap(); + + rv.put(Constants.MSG_META_HEADER_READKEYS, readKey.toBase64()); + + // tags + List tags = args.getOptValues("privTag"); + if ( (tags != null) && (tags.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < tags.size(); i++) + buf.append(strip((String)tags.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_TAGS, buf.toString()); + } + + // name + String name = args.getOptValue("name"); + if (name != null) + rv.put(Constants.MSG_META_HEADER_NAME, strip(name)); + // description + String desc = args.getOptValue("description"); + if (desc != null) + rv.put(Constants.MSG_META_HEADER_DESCRIPTION, strip(desc)); + + // private archives of this channel + List archives = args.getOptValues("privArchive"); + if ( (archives != null) && (archives.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < archives.size(); i++) + buf.append(strip((String)archives.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_ARCHIVES, buf.toString()); + } + + ui.debugMessage("private headers: " + rv); + return rv; + } + + private boolean writeMeta(UI ui, String metaOut, String refStr, SigningPublicKey identPublic, SigningPrivateKey identPrivate, SessionKey bodyKey, Map pubHeaders, Map privHeaders, byte avatar[]) { + FileOutputStream fos = null; + try { + byte encBody[] = encryptBody(_ctx, writeRawBody(refStr, privHeaders, avatar), bodyKey); + fos = new FileOutputStream(metaOut); + Sha256Standalone hash = new Sha256Standalone(); + DataHelper.write(fos, DataHelper.getUTF8(Constants.TYPE_CURRENT+"\n"), hash); + TreeSet ordered = new TreeSet(pubHeaders.keySet()); + for (Iterator iter = ordered.iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = (String)pubHeaders.get(key); + DataHelper.write(fos, DataHelper.getUTF8(key + '=' + val + '\n'), hash); + } + DataHelper.write(fos, DataHelper.getUTF8("\nSize=" + encBody.length + "\n"), hash); + DataHelper.write(fos, encBody, hash); + + byte authorizationHash[] = ((Sha256Standalone)hash.clone()).digest(); // digest() 
reset()s + Signature authorizationSig = _ctx.dsa().sign(new Hash(authorizationHash), identPrivate); + ui.debugMessage("Authorization hash: " + Base64.encode(authorizationHash) + " sig: " + authorizationSig.toBase64()); + DataHelper.write(fos, DataHelper.getUTF8("AuthorizationSig=" + authorizationSig.toBase64() + "\n"), hash); + + byte authenticationHash[] = hash.digest(); + Signature authenticationSig = _ctx.dsa().sign(new Hash(authenticationHash), identPrivate); + ui.debugMessage("Authentication hash: " + Base64.encode(authenticationHash) + " sig: " + authenticationSig.toBase64()); + DataHelper.write(fos, DataHelper.getUTF8("AuthenticationSig=" + authenticationSig.toBase64() + "\n"), hash); + + fos.close(); + fos = null; + return true; + } catch (IOException ioe) { + ui.errorMessage("Error writing the meta", ioe); + ui.commandComplete(-1, null); + return false; + } finally { + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + private byte[] writeRawBody(String refStr, Map privHeaders, byte avatar[]) throws IOException { + ByteArrayOutputStream baos = new ByteArrayOutputStream(4*1024); + ZipOutputStream zos = new ZipOutputStream(baos); + if ( (privHeaders != null) && (privHeaders.size() > 0) ) { + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_HEADERS); + entry.setTime(0); + zos.putNextEntry(entry); + write(privHeaders, zos); + zos.closeEntry(); + } + if ( (avatar != null) && (avatar.length > 0) ) { + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_AVATAR); + entry.setTime(0); + entry.setSize(avatar.length); + zos.putNextEntry(entry); + zos.write(avatar); + zos.closeEntry(); + } + if (refStr != null) { + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_REFERENCES); + entry.setTime(0); + byte ref[] = DataHelper.getUTF8(refStr); + entry.setSize(ref.length); + zos.putNextEntry(entry); + zos.write(ref); + zos.closeEntry(); + } + zos.close(); + + byte raw[] = baos.toByteArray(); + return raw; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "changen", + "--name", "my name", + "--description", "this is my channel", + "--privTag", "tag1", + "--privTag", "tag2", + "--privTag", "tag3", + "--metaOut", "/tmp/metaOut", + "--keyManageOut", "/tmp/manageOut", + "--keyReplyOut", "/tmp/replyOut"}); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/ChanList.java b/src/syndie/db/ChanList.java new file mode 100644 index 0000000..7c735ee --- /dev/null +++ b/src/syndie/db/ChanList.java @@ -0,0 +1,57 @@ +package syndie.db; + +import java.io.File; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; + +/** + *CLI chanlist + * --db $url + * --login $login + * --pass $pass + */ +public class ChanList extends CommandImpl { + ChanList() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + client.connect(args.getOptValue("db")); + } + Map ids = client.getChannelIds(); + for (Iterator iter = ids.keySet().iterator(); iter.hasNext(); ) { + Long id = (Long)iter.next(); + Hash chan = (Hash)ids.get(id); + 
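// A compressed sketch of the signing order in ChanGen.writeMeta() above: one
// running SHA-256 is signed twice. The authorization hash covers the version
// line, the sorted public headers, the Size header and the encrypted body; the
// authentication hash additionally covers the AuthorizationSig line itself,
// which is why the digest is cloned before the first sign (digest() resets the
// state). The two byte arrays below are hypothetical stand-ins for the data
// writeMeta() actually streams out.
import gnu.crypto.hash.Sha256Standalone;
import net.i2p.data.DataHelper;

public class MetaSigOrderingExample {
    public static void main(String args[]) {
        byte headersAndBody[] = DataHelper.getUTF8("<version line, headers, Size=, encrypted body>");
        byte authorizationSigLine[] = DataHelper.getUTF8("AuthorizationSig=<base64>\n");

        Sha256Standalone hash = new Sha256Standalone();
        hash.update(headersAndBody);
        byte authorizationHash[] = ((Sha256Standalone)hash.clone()).digest(); // first dsa().sign() in writeMeta()

        hash.update(authorizationSigLine);
        byte authenticationHash[] = hash.digest();                            // second dsa().sign() in writeMeta()

        System.out.println("authorization hash bytes:  " + authorizationHash.length);  // 32
        System.out.println("authentication hash bytes: " + authenticationHash.length); // 32
    }
}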
ui.statusMessage("Channel " + id + ": " + chan.toBase64()); + } + ui.commandComplete(0, null); + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + } + return client; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "chanlist", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/CommandImpl.java b/src/syndie/db/CommandImpl.java new file mode 100644 index 0000000..ff083e9 --- /dev/null +++ b/src/syndie/db/CommandImpl.java @@ -0,0 +1,256 @@ +package syndie.db; + +import gnu.crypto.hash.Sha256Standalone; +import java.io.*; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.util.Log; +import syndie.Constants; +import net.i2p.data.*; +import syndie.data.ReferenceNode; + +public abstract class CommandImpl implements CLI.Command { + boolean writeKey(UI ui, String filename, PrivateKey key, Hash scope) { + return writeKey(ui, filename, Constants.KEY_FUNCTION_REPLY, scope, key.toBase64()); + } + boolean writeKey(UI ui, String filename, SigningPrivateKey key, Hash scope) { + return writeKey(ui, filename, Constants.KEY_FUNCTION_MANAGE, scope, key.toBase64()); + } + boolean writeKey(UI ui, String filename, SessionKey key, Hash scope) { + return writeKey(ui, filename, Constants.KEY_FUNCTION_READ, scope, key.toBase64()); + } + boolean writeKey(UI ui, String filename, String type, Hash scope, String data) { + if (filename == null) { + ui.errorMessage("Filename is null for writing?"); + return false; + } + FileOutputStream fos = null; + try { + fos = new FileOutputStream(filename); + fos.write(DataHelper.getUTF8("keytype: " + type + "\n")); + if (scope != null) + fos.write(DataHelper.getUTF8("scope: " + scope.toBase64() + "\n")); + fos.write(DataHelper.getUTF8("raw: " + data + "\n")); + fos.close(); + fos = null; + return true; + } catch (IOException ioe) { + ui.errorMessage("Error writing the key", ioe); + return false; + } finally { + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + byte[] read(UI ui, String filename, int maxSize) { + if (filename == null) return null; + FileInputStream fis = null; + try { + File f = new File(filename); + if (!f.exists()) + return null; + if (f.length() > maxSize) + return null; + fis = new FileInputStream(f); + byte data[] = new byte[(int)f.length()]; + if (data.length != DataHelper.read(fis, data)) + return null; + fis.close(); + fis = null; + return data; + } catch (IOException ioe) { + ui.debugMessage("Error reading the file", ioe); + return null; + } finally { + if (fis != null) try { fis.close(); } catch (IOException ioe) {} + } + } + + String readRefs(UI ui, String filename) { + FileInputStream fin = null; + File f = new File(filename); + if (f.exists()) { + ui.debugMessage("References file exists: " + f.getPath()); + try { + fin = new FileInputStream(f); + List refNodes = ReferenceNode.buildTree(fin); + ui.debugMessage("Reference nodes: " + refNodes.size()); + return ReferenceNode.walk(refNodes); + } catch (IOException ioe) { + ui.errorMessage("Error pulling in the refs", ioe); + return null; + } finally { + if (fin != null) try { fin.close(); } catch (IOException ioe) {} + } + } else { + ui.debugMessage("References file does not exist: " + f.getPath()); + return null; + } + } + + void write(Map headers, OutputStream out) throws IOException { + TreeSet ordered = new TreeSet(headers.keySet()); + for (Iterator iter = 
ordered.iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = (String)headers.get(key); + out.write(DataHelper.getUTF8(key + '=' + val + '\n')); + } + } + + /** + * symmetrically encrypt the raw data to the given key by prepending an + * IV followed by the AES/256/CBC encrypted raw data + */ + byte[] encryptBody(I2PAppContext ctx, byte raw[], SessionKey bodyKey) { + byte iv[] = new byte[16]; + byte hmac[] = new byte[Hash.HASH_LENGTH]; + int pad = ctx.random().nextInt(256); + // IV + AES-CBC(rand(nonzero) padding + 0 + internalSize + totalSize + data + rand, IV, bodyKey)+HMAC(bodySection, H(bodyKey+IV)) + int internalSize = pad + 1 + 4 + 4 + raw.length; + int remainder = 16 - (internalSize % 16); + internalSize += remainder; + + byte prep[] = new byte[internalSize]; + int off = 0; + while (off < pad) { + byte b = (byte)(0xFF & ctx.random().nextInt()); + if (b != 0) { + prep[off] = b; + off++; + } + } + prep[off] = 0; + off++; + DataHelper.toLong(prep, off, 4, raw.length); + off += 4; + DataHelper.toLong(prep, off, 4, prep.length+hmac.length); + off += 4; + System.arraycopy(raw, 0, prep, off, raw.length); + off += raw.length; + int tail = (prep.length-off); + while (off < prep.length) { + byte b = (byte)(0xFF & ctx.random().nextInt()); + prep[off] = b; + off++; + } + + // ok, prepared. now lets encrypt + ctx.random().nextBytes(iv); + byte rv[] = new byte[iv.length+prep.length+hmac.length]; + System.arraycopy(iv, 0, rv, 0, iv.length); + ctx.aes().encrypt(prep, 0, rv, 16, bodyKey, rv, 0, prep.length); + + // append HMAC(bodySection, H(bodyKey+IV)) + byte hmacPreKey[] = new byte[SessionKey.KEYSIZE_BYTES+iv.length]; + System.arraycopy(bodyKey.getData(), 0, hmacPreKey, 0, SessionKey.KEYSIZE_BYTES); + System.arraycopy(iv, 0, hmacPreKey, SessionKey.KEYSIZE_BYTES, iv.length); + byte hmacKey[] = ctx.sha().calculateHash(hmacPreKey).getData(); + ctx.hmac256().calculate(new SessionKey(hmacKey), rv, 16, prep.length, hmac, 0); + System.arraycopy(hmac, 0, rv, iv.length+prep.length, hmac.length); + + if (true) { + Log log = ctx.logManager().getLog(getClass()); + Sha256Standalone dbg = new Sha256Standalone(); + dbg.update(rv); + byte h[] = dbg.digest(); + log.debug("Encrypted body hashes to " + Base64.encode(h)); + log.debug("key used: " + Base64.encode(bodyKey.getData())); + log.debug("IV used: " + Base64.encode(iv)); + log.debug("pad: " + pad); + log.debug("remainder: " + remainder); + log.debug("internalSize: " + internalSize); + log.debug("raw.length: " + raw.length); + log.debug("tail: " + tail); + log.debug("hmac: " + Base64.encode(hmac)); + } + return rv; + } + + /** + * asymmetrically encrypt the raw data to the given key by prepending an + * ElGamal/2048 encrypted AES/256 key and IV block, followed by the + * AES/256/CBC encrypted raw data + */ + byte[] encryptBody(I2PAppContext ctx, byte raw[], PublicKey encryptTo) { + byte data[] = new byte[32+16]; + SessionKey key = ctx.keyGenerator().generateSessionKey(); + byte preIV[] = new byte[16]; + ctx.random().nextBytes(preIV); + System.arraycopy(preIV, 0, data, 0, preIV.length); + System.arraycopy(key.getData(), 0, data, preIV.length, SessionKey.KEYSIZE_BYTES); + byte enc[] = ctx.elGamalEngine().encrypt(data, encryptTo); + //System.out.println("Asym block [" + enc.length + "]:\n" + Base64.encode(enc) + "\npubKey:\n" + Base64.encode(encryptTo.getData())); + + + byte iv[] = new byte[16]; + Hash ivH = ctx.sha().calculateHash(preIV); + System.arraycopy(ivH.getData(), 0, iv, 0, iv.length); + + byte hmac[] = new 
byte[Hash.HASH_LENGTH]; + + int pad = ctx.random().nextInt(256); + // IV + AES-CBC(rand(nonzero) padding + 0 + internalSize + totalSize + data + rand, IV, bodyKey)+HMAC(bodySection, H(bodyKey+IV)) + int internalSize = pad + 1 + 4 + 4 + raw.length; + int remainder = 16 - (internalSize % 16); + internalSize += remainder; + + byte prep[] = new byte[internalSize]; + int off = 0; + while (off < pad) { + byte b = (byte)(0xFF & ctx.random().nextInt()); + if (b != 0) { + prep[off] = b; + off++; + } + } + prep[off] = 0; + off++; + DataHelper.toLong(prep, off, 4, raw.length); + off += 4; + DataHelper.toLong(prep, off, 4, prep.length+hmac.length); + off += 4; + System.arraycopy(raw, 0, prep, off, raw.length); + off += raw.length; + while (off < prep.length) { + byte b = (byte)(0xFF & ctx.random().nextInt()); + prep[off] = b; + off++; + } + + // ok, prepared. now lets encrypt + byte rv[] = new byte[enc.length+prep.length+hmac.length]; + System.arraycopy(enc, 0, rv, 0, enc.length); + ctx.aes().encrypt(prep, 0, rv, enc.length, key, iv, prep.length); + + // append HMAC(bodySection, H(bodyKey+IV)) + byte hmacPreKey[] = new byte[SessionKey.KEYSIZE_BYTES+iv.length]; + System.arraycopy(key.getData(), 0, hmacPreKey, 0, SessionKey.KEYSIZE_BYTES); + System.arraycopy(iv, 0, hmacPreKey, SessionKey.KEYSIZE_BYTES, iv.length); + byte hmacKey[] = ctx.sha().calculateHash(hmacPreKey).getData(); + ctx.hmac256().calculate(new SessionKey(hmacKey), rv, enc.length, prep.length, hmac, 0); + System.arraycopy(hmac, 0, rv, enc.length+prep.length, hmac.length); + + return rv; + } + + static final String strip(String orig) { return strip(orig, "\t\n\r\f", ' '); } + static final String strip(String orig, String charsToRemove, char replacement) { + boolean changed = false; + if (orig == null) return ""; + char buf[] = orig.toCharArray(); + for (int i = 0; i < buf.length; i++) { + if (charsToRemove.indexOf(buf[i]) != -1) { + buf[i] = replacement; + changed = true; + } + } + if (changed) + return new String(buf); + else + return orig; + } + + boolean verifySig(DBClient client, Signature sig, Hash hash, SigningPublicKey pubKey) { + return client.ctx().dsa().verifySignature(sig, hash, pubKey); + } +} diff --git a/src/syndie/db/DBClient.java b/src/syndie/db/DBClient.java new file mode 100644 index 0000000..0018bf0 --- /dev/null +++ b/src/syndie/db/DBClient.java @@ -0,0 +1,2246 @@ +package syndie.db; + +import java.io.*; +import java.sql.*; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.Set; +import java.util.TreeMap; +import java.util.zip.ZipEntry; +import java.util.zip.ZipInputStream; +import java.util.zip.ZipOutputStream; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.ArchiveInfo; +import syndie.data.ChannelInfo; +import syndie.data.MessageInfo; +import syndie.data.NymKey; +import syndie.data.ReferenceNode; + +import syndie.data.SyndieURI; +import net.i2p.I2PAppContext; +import net.i2p.util.Log; + +public class DBClient { + private static final Class[] _gcjKludge = new Class[] { + org.hsqldb.jdbcDriver.class + , org.hsqldb.GCJKludge.class + , org.hsqldb.persist.GCJKludge.class + }; + private I2PAppContext _context; + private Log _log; + + private Connection _con; + private SyndieURIDAO _uriDAO; + private String _login; + private String _pass; + private long _nymId; + private File _rootDir; + private String _url; + private 
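// A worked size example for the symmetric CommandImpl.encryptBody() framing above,
// i.e. IV + AES/256/CBC(nonzero padding, zero terminator, rawLen, totalLen, raw,
// filler) + HMAC. The raw length and pad count below are arbitrary example values.
public class EncryptBodySizeExample {
    public static void main(String args[]) {
        int rawLen = 100;                                   // plaintext body bytes
        int pad = 37;                                       // random nonzero-byte padding (0..255)
        int internalSize = pad + 1 + 4 + 4 + rawLen;        // 146: pad, zero byte, two 4-byte lengths, data
        internalSize += 16 - (internalSize % 16);           // 160: rounded up to the AES block size
        int total = 16 + internalSize + 32;                 // IV + ciphertext + HMAC-SHA256
        System.out.println("encrypted body size: " + total + " bytes");  // 208
    }
}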
Thread _shutdownHook; + private boolean _shutdownInProgress; + private String _defaultArchive; + private String _httpProxyHost; + private int _httpProxyPort; + + public DBClient(I2PAppContext ctx, File rootDir) { + _context = ctx; + _log = ctx.logManager().getLog(getClass()); + _rootDir = rootDir; + _shutdownInProgress = false; + _shutdownHook = new Thread(new Thread(new Runnable() { + public void run() { + _shutdownInProgress = true; + close(); + } + }, "DB shutdown")); + } + + public void connect(String url) throws SQLException { + //System.out.println("Connecting to " + url); + _url = url; + _con = DriverManager.getConnection(url); + Runtime.getRuntime().addShutdownHook(_shutdownHook); + + initDB(); + _uriDAO = new SyndieURIDAO(this); + _login = null; + _pass = null; + _nymId = -1; + } + public long connect(String url, String login, String passphrase) throws SQLException { + connect(url); + return getNymId(login, passphrase); + } + I2PAppContext ctx() { return _context; } + Connection con() { return _con; } + + /** if logged in, the login used is returned here */ + String getLogin() { return _login; } + /** if logged in, the password authenticating it is returned here */ + String getPass() { return _pass; } + boolean isLoggedIn() { return _login != null; } + /** if logged in, the internal nymId associated with that login */ + long getLoggedInNymId() { return _nymId; } + + File getTempDir() { return new File(_rootDir, "tmp"); } + File getOutboundDir() { return new File(_rootDir, "outbound"); } + File getArchiveDir() { return new File(_rootDir, "archive"); } + + String getDefaultHTTPProxyHost() { return _httpProxyHost; } + void setDefaultHTTPProxyHost(String host) { _httpProxyHost = host; } + int getDefaultHTTPProxyPort() { return _httpProxyPort; } + void setDefaultHTTPProxyPort(int port) { _httpProxyPort = port; } + String getDefaultHTTPArchive() { return _defaultArchive; } + void setDefaultHTTPArchive(String archive) { _defaultArchive = archive; } + + public void close() { + _login = null; + _pass = null; + _nymId = -1; + _defaultArchive = null; + _httpProxyHost = null; + _httpProxyPort = -1; + try { + if (_con == null) return; + if (_con.isClosed()) return; + PreparedStatement stmt = _con.prepareStatement("SHUTDOWN"); + stmt.execute(); + if (_log.shouldLog(Log.INFO)) + _log.info("Database shutdown"); + stmt.close(); + _con.close(); + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Error closing the connection and shutting down the database", se); + } + if (!_shutdownInProgress) + Runtime.getRuntime().removeShutdownHook(_shutdownHook); + } + + String getString(String query, int column, long keyVal) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(query); + stmt.setLong(1, keyVal); + rs = stmt.executeQuery(); + if (rs.next()) { + String rv = rs.getString(column); + if (!rs.wasNull()) + return rv; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Error fetching the string", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + public static final long NYM_ID_LOGIN_UNKNOWN = -1; + public static final long NYM_ID_PASSPHRASE_INVALID = -2; + public static final long NYM_ID_LOGIN_ALREADY_EXISTS = -3; + + private static final String SQL_GET_NYM_ID = "SELECT nymId, passSalt, passHash FROM nym WHERE login = ?"; + /** + * if the passphrase is blank, simply get the nymId for 
the login, otherwise + * authenticate the passphrase, returning -1 if the login doesn't exist, -2 + * if the passphrase is invalid, or the nymId if it is correct. If the nym and + * password are both set and are authenticated, they are stored in memory on + * the DBClient itself and can be queried with getLogin() and getPass(). + */ + public long getNymId(String login, String passphrase) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_NYM_ID); + stmt.setString(1, login); + rs = stmt.executeQuery(); + if (rs.next()) { + long nymId = rs.getLong(1); + byte salt[] = rs.getBytes(2); + byte hash[] = rs.getBytes(3); + if (passphrase == null) { + return nymId; + } else { + byte calc[] = _context.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(passphrase)).getData(); + if (DataHelper.eq(calc, hash)) { + _login = login; + _pass = passphrase; + _nymId = nymId; + return nymId; + } else { + return NYM_ID_PASSPHRASE_INVALID; + } + } + } else { + return NYM_ID_LOGIN_UNKNOWN; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Unable to check the get the nymId", se); + return NYM_ID_LOGIN_UNKNOWN; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_INSERT_NYM = "INSERT INTO nym (nymId, login, publicName, passSalt, passHash, isDefaultUser) VALUES (?, ?, ?, ?, ?, ?)"; + public long register(String login, String passphrase, String publicName) { + long nymId = nextId("nymIdSequence"); + byte salt[] = new byte[16]; + _context.random().nextBytes(salt); + byte hash[] = _context.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(passphrase)).getData(); + + PreparedStatement stmt = null; + try { + stmt = _con.prepareStatement(SQL_INSERT_NYM); + stmt.setLong(1, nymId); + stmt.setString(2, login); + stmt.setString(3, publicName); + stmt.setBytes(4, salt); + stmt.setBytes(5, hash); + stmt.setBoolean(6, false); + int rows = stmt.executeUpdate(); + if (rows != 1) + return NYM_ID_LOGIN_ALREADY_EXISTS; + else + return nymId; + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Unable to register the nymId", se); + return NYM_ID_LOGIN_ALREADY_EXISTS; + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + public long nextId(String seq) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + //String query = "SELECT NEXT VALUE FOR " + seq + " FROM information_schema.system_sequences WHERE sequence_name = '" + seq.toUpperCase() + "'"; + String query = "CALL NEXT VALUE FOR " + seq; + stmt = _con.prepareStatement(query); + rs = stmt.executeQuery(); + if (rs.next()) { + long rv = rs.getLong(1); + if (rs.wasNull()) + return -1; + else + return rv; + } else { + return -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the next sequence ID", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + public SyndieURI getURI(long uriId) { + return _uriDAO.fetch(uriId); + } + public long addURI(SyndieURI uri) { + return _uriDAO.add(uri); + } + + public static void main(String args[]) { + DBClient client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + try { + 
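// A standalone sketch of the register()/getNymId() round trip above: the nym
// passphrase is never stored. register() keeps a random 16-byte salt plus
// generateSessionKey(salt, passphrase).getData() as the hash, and getNymId()
// repeats the derivation and compares with DataHelper.eq(). The passphrase below
// is a made-up example.
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;

public class NymPassphraseExample {
    public static void main(String args[]) {
        I2PAppContext ctx = I2PAppContext.getGlobalContext();

        // what register() stores
        byte salt[] = new byte[16];
        ctx.random().nextBytes(salt);
        byte hash[] = ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8("my pass")).getData();

        // what getNymId() recomputes at login time
        byte calc[] = ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8("my pass")).getData();
        System.out.println("login ok? " + DataHelper.eq(calc, hash)); // true
    }
}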
client.connect("jdbc:hsqldb:file:/tmp/testSynDB;hsqldb.nio_data_file=false"); + client.close(); + } catch (SQLException se) { + se.printStackTrace(); + } + } + + private void initDB() { + int version = checkDBVersion(); + if (_log.shouldLog(Log.DEBUG)) + _log.debug("Known DB version: " + version); + if (version < 0) + buildDB(); + int updates = getDBUpdateCount(); // syndie/db/ddl_update$n.txt + for (int i = 1; i <= updates; i++) { + if (i >= version) { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("Updating database version " + i + " to " + (i+1)); + updateDB(i); + } else { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("No need for update " + i + " (version: " + version + ")"); + } + } + } + private int checkDBVersion() { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement("SELECT versionNum FROM appVersion WHERE app = 'syndie.db'"); + rs = stmt.executeQuery(); + while (rs.next()) { + int rv = rs.getInt(1); + if (!rs.wasNull()) + return rv; + } + return -1; + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Unable to check the database version (does not exist?)", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + private void buildDB() { + if (_log.shouldLog(Log.INFO)) + _log.info("Building the database..."); + try { + InputStream in = getClass().getResourceAsStream("ddl.txt"); + if (in != null) { + BufferedReader r = new BufferedReader(new InputStreamReader(in)); + StringBuffer cmdBuf = new StringBuffer(); + String line = null; + while ( (line = r.readLine()) != null) { + line = line.trim(); + if (line.startsWith("//") || line.startsWith("--")) + continue; + cmdBuf.append(' ').append(line); + if (line.endsWith(";")) { + exec(cmdBuf.toString()); + cmdBuf.setLength(0); + } + } + } + } catch (IOException ioe) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error reading the db script", ioe); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error building the db", se); + } + } + private int getDBUpdateCount() { + int updates = 0; + while (true) { + try { + InputStream in = getClass().getResourceAsStream("ddl_update" + (updates+1) + ".txt"); + if (in != null) { + in.close(); + updates++; + } else { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("There were " + updates + " database updates known for " + getClass().getName() + " ddl_update*.txt"); + return updates; + } + } catch (IOException ioe) { + if (_log.shouldLog(Log.WARN)) + _log.warn("problem listing the updates", ioe); + } + } + } + private void updateDB(int oldVersion) { + try { + InputStream in = getClass().getResourceAsStream("ddl_update" + oldVersion + ".txt"); + if (in != null) { + BufferedReader r = new BufferedReader(new InputStreamReader(in)); + StringBuffer cmdBuf = new StringBuffer(); + String line = null; + while ( (line = r.readLine()) != null) { + line = line.trim(); + if (line.startsWith("//") || line.startsWith("--")) + continue; + cmdBuf.append(' ').append(line); + if (line.endsWith(";")) { + exec(cmdBuf.toString()); + cmdBuf.setLength(0); + } + } + } + } catch (IOException ioe) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error reading the db script", ioe); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error building the db", se); + } + } + private void exec(String cmd) throws SQLException { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("Exec [" + cmd + "]"); + 
PreparedStatement stmt = null; + try { + stmt = _con.prepareStatement(cmd); + stmt.executeUpdate(); + } finally { + if (stmt != null) stmt.close(); + } + } + public int exec(String sql, long param1) throws SQLException { + if (_log.shouldLog(Log.DEBUG)) + _log.debug("Exec param [" + sql + "]"); + PreparedStatement stmt = null; + try { + stmt = _con.prepareStatement(sql); + stmt.setLong(1, param1); + return stmt.executeUpdate(); + } finally { + if (stmt != null) stmt.close(); + } + } + public void exec(String query, UI ui) { + ui.debugMessage("Executing [" + query + "]"); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(query); + String up = query.toUpperCase(); + if (!up.startsWith("SELECT") && !up.startsWith("CALL")) { + int rows = stmt.executeUpdate(); + ui.statusMessage("Command completed, updating " + rows + " rows"); + ui.commandComplete(rows, null); + return; + } + rs = stmt.executeQuery(); + ResultSetMetaData md = stmt.getMetaData(); + int rows = 0; + while (rs.next()) { + rows++; + ui.statusMessage("----------------------------------------------------------"); + for (int i = 0; i < md.getColumnCount(); i++) { + Object obj = rs.getObject(i+1); + if (obj != null) { + if (obj instanceof byte[]) { + String str = Base64.encode((byte[])obj); + if (str.length() <= 32) + ui.statusMessage(md.getColumnLabel(i+1) + ":\t" + str); + else + ui.statusMessage(md.getColumnLabel(i+1) + ":\t" + str.substring(0,32) + "..."); + } else { + ui.statusMessage(md.getColumnLabel(i+1) + ":\t" + obj.toString()); + } + } else { + ui.statusMessage(md.getColumnLabel(i+1) + ":\t[null value]"); + } + } + } + ui.statusMessage("Rows matching the query: " + rows); + ui.commandComplete(rows, null); + } catch (SQLException se) { + ui.errorMessage("Error executing the query", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_READKEYS = "SELECT keyType, keyData, keySalt, authenticated, keyPeriodBegin, keyPeriodEnd " + + "FROM nymKey WHERE " + + "keyChannel = ? AND nymId = ? AND keyFunction = '" + Constants.KEY_FUNCTION_READ + "'"; + private static final String SQL_GET_CHANREADKEYS = "SELECT keyData, keyStart FROM channelReadKey WHERE channelId = ? 
ORDER BY keyStart ASC"; + /** + * list of SessionKey instances that the nym specified can use to try and read/write + * posts to the given identHash channel + */ + public List getReadKeys(Hash identHash, long nymId, String nymPassphrase) { + List rv = new ArrayList(1); + byte pass[] = DataHelper.getUTF8(nymPassphrase); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_READKEYS); + stmt.setBytes(1, identHash.getData()); + stmt.setLong(2, nymId); + rs = stmt.executeQuery(); + while (rs.next()) { + String type = rs.getString(1); + byte data[] = rs.getBytes(2); + byte salt[] = rs.getBytes(3); + boolean auth= rs.getBoolean(4); + Date begin = rs.getDate(5); + Date end = rs.getDate(6); + + if (Constants.KEY_TYPE_AES256.equals(type)) { + if (salt != null) { + byte readKey[] = new byte[SessionKey.KEYSIZE_BYTES]; + SessionKey saltedKey = _context.keyGenerator().generateSessionKey(salt, pass); + _context.aes().decrypt(data, 0, readKey, 0, saltedKey, salt, data.length); + int pad = (int)readKey[readKey.length-1]; + byte key[] = new byte[readKey.length-pad]; + System.arraycopy(readKey, 0, key, 0, key.length); + rv.add(new SessionKey(key)); + } else { + rv.add(new SessionKey(data)); + } + } else { + // we dont know how to deal with anything but AES256 + } + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the read keys", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + // ok, that covers nym-local keys, now lets look for any channelReadKeys that came from + // signed channel metadata + long channelId = getChannelId(identHash); + try { + stmt = _con.prepareStatement(SQL_GET_CHANREADKEYS); + //stmt.setBytes(1, identHash.getData()); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + while (rs.next()) { + byte key[] = rs.getBytes(1); + if ( (key != null) && (key.length == SessionKey.KEYSIZE_BYTES) ) + rv.add(new SessionKey(key)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel read keys", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + private static final String SQL_GET_KNOWN_EDITION = "SELECT MAX(edition) FROM channel WHERE channelHash = ?"; + /** highest channel meta edition, or -1 if unknown */ + public long getKnownEdition(Hash ident) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_KNOWN_EDITION); + stmt.setBytes(1, ident.getData()); + rs = stmt.executeQuery(); + if (rs.next()) { + long edition = rs.getLong(1); + if (rs.wasNull()) + return -1; + else + return edition; + } else { + return -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's meta edition", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_CHANNEL_IDS = "SELECT channelId, channelHash FROM channel ORDER BY channelHash"; + /** retrieve a mapping of channelId (Long) to channel hash (Hash) */ + public Map getChannelIds() { + Map rv = new HashMap(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = 
_con.prepareStatement(SQL_GET_CHANNEL_IDS); + rs = stmt.executeQuery(); + while (rs.next()) { + long id = rs.getLong(1); + if (rs.wasNull()) + continue; + byte hash[] = rs.getBytes(2); + if (rs.wasNull()) + continue; + if (hash.length != Hash.HASH_LENGTH) + continue; + rv.put(new Long(id), new Hash(hash)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + private static final String SQL_GET_CHANNEL_ID = "SELECT channelId FROM channel WHERE channelHash = ?"; + public long getChannelId(Hash channel) { + if (channel == null) return -1; + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_ID); + stmt.setBytes(1, channel.getData()); + rs = stmt.executeQuery(); + if (rs.next()) { + long id = rs.getLong(1); + if (rs.wasNull()) + return -1; + else + return id; + } else { + return -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel id", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_SIGNKEYS = "SELECT keyType, keyData, keySalt, authenticated, keyPeriodBegin, keyPeriodEnd " + + "FROM nymKey WHERE " + + "keyChannel = ? AND nymId = ? AND "+ + "(keyFunction = '" + Constants.KEY_FUNCTION_MANAGE + "' OR keyFunction = '" + Constants.KEY_FUNCTION_POST + "')"; + /** + * list of SigningPrivateKey instances that the nym specified can use to + * try and authenticate/authorize posts to the given identHash channel + */ + public List getSignKeys(Hash identHash, long nymId, String nymPassphrase) { + ensureLoggedIn(); + if (identHash == null) throw new IllegalArgumentException("you need an identHash (or you should use getNymKeys())"); + List rv = new ArrayList(1); + byte pass[] = DataHelper.getUTF8(nymPassphrase); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_SIGNKEYS); + stmt.setBytes(1, identHash.getData()); + stmt.setLong(2, nymId); + rs = stmt.executeQuery(); + while (rs.next()) { + String type = rs.getString(1); + byte data[] = rs.getBytes(2); + byte salt[] = rs.getBytes(3); + boolean auth= rs.getBoolean(4); + Date begin = rs.getDate(5); + Date end = rs.getDate(6); + + if (Constants.KEY_TYPE_DSA.equals(type)) { + if (salt != null) { + byte readKey[] = new byte[data.length]; + SessionKey saltedKey = _context.keyGenerator().generateSessionKey(salt, pass); + _context.aes().decrypt(data, 0, readKey, 0, saltedKey, salt, data.length); + int pad = (int)readKey[readKey.length-1]; + byte key[] = new byte[readKey.length-pad]; + System.arraycopy(readKey, 0, key, 0, key.length); + rv.add(new SigningPrivateKey(key)); + } else { + rv.add(new SigningPrivateKey(data)); + } + } else { + // we dont know how to deal with anything but DSA signing keys + } + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the signing keys", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + private static final String SQL_GET_REPLY_KEY = "SELECT encryptKey FROM channel WHERE channelId = ?"; + 
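// A compressed sketch of the salted nym-key unwrap used in getReadKeys() and
// getSignKeys() above (and again in getNymKeys() below): keys in the nymKey table
// are stored AES-encrypted under a key derived from the login passphrase and a
// per-row salt, with the pad length kept in the final plaintext byte. The class
// and method names here are hypothetical.
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.data.SessionKey;

public class NymKeyUnwrapExample {
    static byte[] unwrapNymKey(I2PAppContext ctx, byte encrypted[], byte salt[], String passphrase) {
        SessionKey saltedKey = ctx.keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(passphrase));
        byte decrypted[] = new byte[encrypted.length];
        ctx.aes().decrypt(encrypted, 0, decrypted, 0, saltedKey, salt, encrypted.length);
        int pad = (int)decrypted[decrypted.length - 1];     // trailing byte holds the pad length
        byte key[] = new byte[decrypted.length - pad];
        System.arraycopy(decrypted, 0, key, 0, key.length);
        return key;                                         // raw AES-256 read key or DSA signing key bytes
    }
}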
public PublicKey getReplyKey(long channelId) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_REPLY_KEY); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + if (rs.next()) { + byte rv[] = rs.getBytes(1); + if (rs.wasNull()) + return null; + else + return new PublicKey(rv); + } else { + return null; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's reply key", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_NYMKEYS = "SELECT keyType, keyData, keySalt, authenticated, keyPeriodBegin, keyPeriodEnd, keyFunction, keyChannel " + + "FROM nymKey WHERE nymId = ?"; + /** return a list of NymKey structures */ + public List getNymKeys(long nymId, String pass, Hash channel, String keyFunction) { + ensureLoggedIn(); + List rv = new ArrayList(1); + byte passB[] = DataHelper.getUTF8(pass); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + String query = SQL_GET_NYMKEYS; + if (channel != null) + query = query + " AND keyChannel = ?"; + if (keyFunction != null) + query = query + " AND keyFunction = ?"; + stmt = _con.prepareStatement(query); + stmt.setLong(1, nymId); + if (channel != null) { + stmt.setBytes(2, channel.getData()); + if (keyFunction != null) + stmt.setString(3, keyFunction); + } else if (keyFunction != null) { + stmt.setString(2, keyFunction); + } + + rs = stmt.executeQuery(); + while (rs.next()) { + String type = rs.getString(1); + byte data[] = rs.getBytes(2); + byte salt[] = rs.getBytes(3); + boolean auth= rs.getBoolean(4); + Date begin = rs.getDate(5); + Date end = rs.getDate(6); + String function = rs.getString(7); + byte chan[] = rs.getBytes(8); + + if (salt != null) { + SessionKey saltedKey = _context.keyGenerator().generateSessionKey(salt, passB); + //_log.debug("salt: " + Base64.encode(salt)); + //_log.debug("passB: " + Base64.encode(passB)); + //_log.debug("encrypted: " + Base64.encode(data)); + byte decr[] = new byte[data.length]; + _context.aes().decrypt(data, 0, decr, 0, saltedKey, salt, data.length); + int pad = (int)decr[decr.length-1]; + //_log.debug("pad: " + pad); + byte key[] = new byte[decr.length-pad]; + System.arraycopy(decr, 0, key, 0, key.length); + //_log.debug("key: " + Base64.encode(key)); + data = key; + } + + rv.add(new NymKey(type, data, _context.sha().calculateHash(data).toBase64(), auth, function, nymId, (chan != null ? new Hash(chan) : null))); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the keys", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + public List getReplyKeys(Hash identHash, long nymId, String pass) { + List keys = getNymKeys(nymId, pass, identHash, Constants.KEY_FUNCTION_REPLY); + List rv = new ArrayList(); + for (int i = 0; i < keys.size(); i++) + rv.add(new PrivateKey(((NymKey)keys.get(i)).getData())); + return rv; + } + + private static final String SQL_GET_AUTHORIZED_POSTERS = "SELECT identKey FROM channel WHERE channelId = ?" + + " UNION " + + "SELECT authPubKey FROM channelPostKey WHERE channelId = ?" 
+ + " UNION " + + "SELECT authPubKey FROM channelManageKey WHERE channelId = ?"; + public List getAuthorizedPosters(Hash channel) { + ensureLoggedIn(); + long channelId = getChannelId(channel); + List rv = new ArrayList(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_AUTHORIZED_POSTERS); + stmt.setLong(1, channelId); + stmt.setLong(2, channelId); + stmt.setLong(3, channelId); + rs = stmt.executeQuery(); + while (rs.next()) { + byte key[] = rs.getBytes(1); + if (rs.wasNull()) { + continue; + } else { + SigningPublicKey pub = new SigningPublicKey(key); + if (!rv.contains(pub)) + rv.add(pub); + } + } + rs.close(); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's authorized posting keys", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + private static final String SQL_GET_IDENT_KEY = "SELECT identKey FROM channel WHERE channelHash = ?"; + public SigningPublicKey getIdentKey(Hash hash) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_IDENT_KEY); + stmt.setBytes(1, hash.getData()); + rs = stmt.executeQuery(); + if (rs.next()) { + byte rv[] = rs.getBytes(1); + if (rs.wasNull()) + return null; + else + return new SigningPublicKey(rv); + } else { + return null; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's ident key", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + /* + private static final String SQL_GET_INTERNAL_MESSAGE_ID_FULL = "SELECT msgId FROM channelMessage WHERE authorChannelHash = ? AND messageId = ? AND targetChannelId = ?"; + private static final String SQL_GET_INTERNAL_MESSAGE_ID_NOAUTH = "SELECT msgId FROM channelMessage WHERE authorChannelHash IS NULL AND messageId = ? AND targetChannelId = ?"; + private static final String SQL_GET_INTERNAL_MESSAGE_ID_NOMSG = "SELECT msgId FROM channelMessage WHERE authorChannelHash = ? 
AND messageId IS NULL AND targetChannelId = ?"; + long getInternalMessageId(Hash author, long targetChannelId, Long messageId) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + if ( (author != null) && (messageId != null) ) { + stmt = _con.prepareStatement(SQL_GET_INTERNAL_MESSAGE_ID_FULL); + stmt.setBytes(1, author.getData()); + stmt.setLong(2, messageId.longValue()); + stmt.setLong(3, targetChannelId); + } else if ( (author == null) && (messageId != null) ) { + stmt = _con.prepareStatement(SQL_GET_INTERNAL_MESSAGE_ID_NOAUTH); + stmt.setLong(1, messageId.longValue()); + stmt.setLong(2, targetChannelId); + } else if ( (author != null) && (messageId == null) ) { + stmt = _con.prepareStatement(SQL_GET_INTERNAL_MESSAGE_ID_NOMSG); + stmt.setBytes(1, author.getData()); + stmt.setLong(2, targetChannelId); + } else { + return -1; + } + rs = stmt.executeQuery(); + if (rs.next()) { + long rv = rs.getLong(1); + if (rs.wasNull()) + return -1; + else + return rv; + } else { + return -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the internal message id", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + */ + + private static final String SQL_GET_CHANNEL_INFO = "SELECT channelId, channelHash, identKey, encryptKey, edition, name, description, allowPubPost, allowPubReply, expiration, readKeyMissing, pbePrompt FROM channel WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_TAG = "SELECT tag, wasEncrypted FROM channelTag WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_POST_KEYS = "SELECT authPubKey FROM channelPostKey WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_MANAGE_KEYS = "SELECT authPubKey FROM channelManageKey WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_ARCHIVES = "SELECT archiveId, wasEncrypted FROM channelArchive WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_READ_KEYS = "SELECT keyData FROM channelReadKey WHERE channelId = ?"; + private static final String SQL_GET_CHANNEL_META_HEADERS = "SELECT headerName, headerValue, wasEncrypted FROM channelMetaHeader WHERE channelId = ? ORDER BY headerName"; + private static final String SQL_GET_CHANNEL_REFERENCES = "SELECT groupId, parentGroupId, siblingOrder, name, description, uriId, referenceType, wasEncrypted FROM channelReferenceGroup WHERE channelId = ? 
ORDER BY parentGroupId ASC, siblingOrder ASC"; + public ChannelInfo getChannel(long channelId) { + ensureLoggedIn(); + ChannelInfo info = new ChannelInfo(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_INFO); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + if (rs.next()) { + // channelId, channelHash, identKey, encryptKey, edition, name, + // description, allowPubPost, allowPubReply, expiration, readKeyMissing, pbePrompt + byte chanHash[] = rs.getBytes(2); + byte identKey[] = rs.getBytes(3); + byte encryptKey[] = rs.getBytes(4); + long edition = rs.getLong(5); + if (rs.wasNull()) edition = -1; + String name = rs.getString(6); + String desc = rs.getString(7); + boolean allowPost = rs.getBoolean(8); + if (rs.wasNull()) allowPost = false; + boolean allowReply = rs.getBoolean(9); + if (rs.wasNull()) allowReply = false; + java.sql.Date exp = rs.getDate(10); + boolean readKeyMissing = rs.getBoolean(11); + if (rs.wasNull()) readKeyMissing = false; + String pbePrompt = rs.getString(12); + + info.setChannelId(channelId); + info.setChannelHash(new Hash(chanHash)); + info.setIdentKey(new SigningPublicKey(identKey)); + info.setEncryptKey(new PublicKey(encryptKey)); + info.setEdition(edition); + info.setName(name); + info.setDescription(desc); + info.setAllowPublicPosts(allowPost); + info.setAllowPublicReplies(allowReply); + if (exp != null) + info.setExpiration(exp.getTime()); + else + info.setExpiration(-1); + info.setReadKeyUnknown(readKeyMissing); + info.setPassphrasePrompt(pbePrompt); + } else { + return null; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's info", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_TAG); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Set encrypted = new HashSet(); + Set unencrypted = new HashSet(); + while (rs.next()) { + // tag, wasEncrypted + String tag = rs.getString(1); + boolean enc = rs.getBoolean(2); + if (rs.wasNull()) + enc = true; + if (enc) + encrypted.add(tag); + else + unencrypted.add(tag); + } + info.setPublicTags(unencrypted); + info.setPrivateTags(encrypted); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's tags", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_POST_KEYS); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Set keys = new HashSet(); + while (rs.next()) { + // authPub + byte key[] = rs.getBytes(1); + if (!rs.wasNull()) + keys.add(new SigningPublicKey(key)); + } + info.setAuthorizedPosters(keys); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's posters", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_MANAGE_KEYS); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Set keys = new HashSet(); + while (rs.next()) { + // authPub + byte key[] 
= rs.getBytes(1); + if (!rs.wasNull()) + keys.add(new SigningPublicKey(key)); + } + info.setAuthorizedManagers(keys); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's managers", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_ARCHIVES); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Set pubIds = new HashSet(); + Set privIds = new HashSet(); + while (rs.next()) { + // archiveId, wasEncrypted + long archiveId = rs.getLong(1); + if (rs.wasNull()) + archiveId = -1; + boolean enc = rs.getBoolean(2); + if (rs.wasNull()) + enc = true; + if (enc) + privIds.add(new Long(archiveId)); + else + pubIds.add(new Long(archiveId)); + } + rs.close(); + rs = null; + stmt.close(); + stmt = null; + + Set pub = new HashSet(); + Set priv = new HashSet(); + for (Iterator iter = pubIds.iterator(); iter.hasNext(); ) { + Long id = (Long)iter.next(); + ArchiveInfo archive = getArchive(id.longValue()); + if (archive != null) + pub.add(archive); + } + for (Iterator iter = privIds.iterator(); iter.hasNext(); ) { + Long id = (Long)iter.next(); + ArchiveInfo archive = getArchive(id.longValue()); + if (archive != null) + priv.add(archive); + } + + info.setPublicArchives(pub); + info.setPrivateArchives(priv); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's managers", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_READ_KEYS); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Set keys = new HashSet(); + while (rs.next()) { + // readKey + byte key[] = rs.getBytes(1); + if (!rs.wasNull()) + keys.add(new SessionKey(key)); + } + info.setReadKeys(keys); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's managers", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_META_HEADERS); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + Properties pub = new Properties(); + Properties priv = new Properties(); + while (rs.next()) { + // headerName, headerValue, wasEncrypted + String name = rs.getString(1); + String val = rs.getString(2); + boolean enc = rs.getBoolean(3); + if (rs.wasNull()) + enc = true; + if (enc) + priv.setProperty(name, val); + else + pub.setProperty(name, val); + } + info.setPublicHeaders(pub); + info.setPrivateHeaders(priv); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's managers", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_CHANNEL_REFERENCES); + stmt.setLong(1, channelId); + rs = stmt.executeQuery(); + List refs = new ArrayList(); + while (rs.next()) { + // groupId, parentGroupId, siblingOrder, name, description, + // uriId, 
referenceType, wasEncrypted + + // ORDER BY parentGroupId, siblingOrder + long groupId = rs.getLong(1); + if (rs.wasNull()) groupId = -1; + long parentGroupId = rs.getLong(2); + if (rs.wasNull()) parentGroupId = -1; + int order = rs.getInt(3); + if (rs.wasNull()) order = 0; + String name = rs.getString(4); + String desc = rs.getString(5); + long uriId = rs.getLong(6); + if (rs.wasNull()) uriId = -1; + String type = rs.getString(7); + boolean enc = rs.getBoolean(8); + if (rs.wasNull()) enc = true; + + SyndieURI uri = getURI(uriId); + DBReferenceNode ref = new DBReferenceNode(name, uri, desc, type, uriId, groupId, parentGroupId, order, enc); + boolean parentFound = false; + for (int i = 0; i < refs.size(); i++) { + DBReferenceNode cur = (DBReferenceNode)refs.get(i); + if (cur.getGroupId() == parentGroupId) { + cur.addChild(ref); + parentFound = true; + } + } + if (!parentFound) + refs.add(ref); // rewt + } + info.setReferences(refs); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the channel's managers", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + return info; + } + + private class DBReferenceNode extends ReferenceNode { + private long _uriId; + private long _groupId; + private long _parentGroupId; + private int _siblingOrder; + private boolean _encrypted; + + public DBReferenceNode(String name, SyndieURI uri, String description, String type, long uriId, long groupId, long parentGroupId, int siblingOrder, boolean encrypted) { + super(name, uri, description, type); + _uriId = uriId; + _groupId = groupId; + _parentGroupId = parentGroupId; + _siblingOrder = siblingOrder; + _encrypted = encrypted; + } + public long getURIId() { return _uriId; } + public long getGroupId() { return _groupId; } + public long getParentGroupId() { return _parentGroupId; } + public int getSiblingOrder() { return _siblingOrder; } + public boolean getEncrypted() { return _encrypted; } + } + + private static final String SQL_GET_ARCHIVE = "SELECT postAllowed, readAllowed, uriId FROM archive WHERE archiveId = ?"; + private ArchiveInfo getArchive(long archiveId) { + ensureLoggedIn(); + ArchiveInfo info = new ArchiveInfo(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_ARCHIVE); + stmt.setLong(1, archiveId); + rs = stmt.executeQuery(); + Set encrypted = new HashSet(); + Set unencrypted = new HashSet(); + while (rs.next()) { + // postAllowed, readAllowed, uriId + boolean post = rs.getBoolean(1); + if (rs.wasNull()) post = false; + boolean read = rs.getBoolean(2); + if (rs.wasNull()) read = false; + long uriId = rs.getLong(3); + if (rs.wasNull()) uriId = -1; + if (uriId >= 0) { + SyndieURI uri = getURI(uriId); + info.setArchiveId(archiveId); + info.setPostAllowed(post); + info.setReadAllowed(read); + info.setURI(uri); + return info; + } + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the archive", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + private static final String SQL_GET_MESSAGES_PRIVATE = "SELECT msgId, messageId FROM channelMessage WHERE targetChannelId = ? 
AND wasPrivate = TRUE AND wasAuthenticated = TRUE ORDER BY messageId ASC"; + public List getMessageIdsPrivate(Hash chan) { + ensureLoggedIn(); + List rv = new ArrayList(); + long chanId = getChannelId(chan); + if (chanId >= 0) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGES_PRIVATE); + stmt.setLong(1, chanId); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, messageId + long msgId = rs.getLong(1); + if (!rs.wasNull()) + rv.add(new Long(msgId)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + } + return rv; + } + + private static final String SQL_GET_MESSAGES_AUTHORIZED = "SELECT msgId, messageId FROM channelMessage WHERE targetChannelId = ? AND wasPrivate = FALSE AND wasAuthorized = TRUE ORDER BY messageId ASC"; + public List getMessageIdsAuthorized(Hash chan) { + ensureLoggedIn(); + List rv = new ArrayList(); + long chanId = getChannelId(chan); + if (chanId >= 0) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGES_AUTHORIZED); + stmt.setLong(1, chanId); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, messageId + long msgId = rs.getLong(1); + if (!rs.wasNull()) + rv.add(new Long(msgId)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + } + return rv; + } + private static final String SQL_GET_MESSAGES_AUTHENTICATED = "SELECT msgId, messageId FROM channelMessage WHERE targetChannelId = ? AND wasPrivate = FALSE AND wasAuthorized = FALSE AND wasAuthenticated = TRUE ORDER BY messageId ASC"; + public List getMessageIdsAuthenticated(Hash chan) { + ensureLoggedIn(); + List rv = new ArrayList(); + long chanId = getChannelId(chan); + if (chanId >= 0) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGES_AUTHENTICATED); + stmt.setLong(1, chanId); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, messageId + long msgId = rs.getLong(1); + if (!rs.wasNull()) + rv.add(new Long(msgId)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + } + return rv; + } + private static final String SQL_GET_MESSAGES_UNAUTHENTICATED = "SELECT msgId, messageId FROM channelMessage WHERE targetChannelId = ? 
AND wasPrivate = FALSE AND wasAuthorized = FALSE AND wasAuthenticated = FALSE ORDER BY messageId ASC"; + public List getMessageIdsUnauthenticated(Hash chan) { + ensureLoggedIn(); + List rv = new ArrayList(); + long chanId = getChannelId(chan); + if (chanId >= 0) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGES_UNAUTHENTICATED); + stmt.setLong(1, chanId); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, messageId + long msgId = rs.getLong(1); + if (!rs.wasNull()) + rv.add(new Long(msgId)); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + } + return rv; + } + + + private static final String SQL_GET_INTERNAL_MESSAGE_ID = "SELECT msgId FROM channelMessage WHERE scopeChannelId = ? AND messageId = ?"; + public MessageInfo getMessage(long scopeId, Long messageId) { + ensureLoggedIn(); + if (messageId == null) return null; + return getMessage(scopeId, messageId.longValue()); + } + public MessageInfo getMessage(long scopeId, long messageId) { + long msgId = getMessageId(scopeId, messageId); + if (msgId >= 0) + return getMessage(msgId); + else + return null; + } + public long getMessageId(long scopeId, long messageId) { + long msgId = -1; + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_INTERNAL_MESSAGE_ID); + stmt.setLong(1, scopeId); + stmt.setLong(2, messageId); + rs = stmt.executeQuery(); + if (rs.next()) { + msgId = rs.getLong(1); + if (rs.wasNull()) + msgId = -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message's id", se); + return -1; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return msgId; + } + + private static final String SQL_GET_MESSAGE_INFO = "SELECT authorChannelId, messageId, targetChannelId, subject, overwriteScopeHash, overwriteMessageId, " + + "forceNewThread, refuseReplies, wasEncrypted, wasPrivate, wasAuthorized, wasAuthenticated, isCancelled, expiration, scopeChannelId, wasPBE, readKeyMissing, replyKeyMissing, pbePrompt " + + "FROM channelMessage WHERE msgId = ?"; + private static final String SQL_GET_MESSAGE_HIERARCHY = "SELECT referencedChannelHash, referencedMessageId FROM messageHierarchy WHERE msgId = ? 
ORDER BY referencedCloseness ASC"; + private static final String SQL_GET_MESSAGE_TAG = "SELECT tag, isPublic FROM messageTag WHERE msgId = ?"; + private static final String SQL_GET_MESSAGE_PAGE_COUNT = "SELECT COUNT(*) FROM messagePage WHERE msgId = ?"; + private static final String SQL_GET_MESSAGE_ATTACHMENT_COUNT = "SELECT COUNT(*) FROM messageAttachment WHERE msgId = ?"; + public MessageInfo getMessage(long internalMessageId) { + ensureLoggedIn(); + MessageInfo info = new MessageInfo(); + info.setInternalId(internalMessageId); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_INFO); + stmt.setLong(1, internalMessageId); + rs = stmt.executeQuery(); + if (rs.next()) { + // authorChannelId, messageId, targetChannelId, subject, overwriteScopeHash, overwriteMessageId, + // forceNewThread, refuseReplies, wasEncrypted, wasPrivate, wasAuthorized, + // wasAuthenticated, isCancelled, expiration, scopeChannelId, wasPBE + long authorId = rs.getLong(1); + if (rs.wasNull()) authorId = -1; + //byte author[] = rs.getBytes(1); + long messageId = rs.getLong(2); + if (rs.wasNull()) messageId = -1; + long targetChannelId = rs.getLong(3); + String subject = rs.getString(4); + byte overwriteChannel[] = rs.getBytes(5); + long overwriteMessage = rs.getLong(6); + if (rs.wasNull()) overwriteMessage = -1; + boolean forceNewThread = rs.getBoolean(7); + if (rs.wasNull()) forceNewThread = false; + boolean refuseReplies = rs.getBoolean(8); + if (rs.wasNull()) refuseReplies = false; + boolean wasEncrypted = rs.getBoolean(9); + if (rs.wasNull()) wasEncrypted = true; + boolean wasPrivate = rs.getBoolean(10); + if (rs.wasNull()) wasPrivate = false; + boolean wasAuthorized = rs.getBoolean(11); + if (rs.wasNull()) wasAuthorized = false; + boolean wasAuthenticated = rs.getBoolean(12); + if (rs.wasNull()) wasAuthenticated = false; + boolean cancelled = rs.getBoolean(13); + if (rs.wasNull()) cancelled = false; + java.sql.Date exp = rs.getDate(14); + long scopeChannelId = rs.getLong(15); + boolean wasPBE = rs.getBoolean(16); + if (rs.wasNull()) + wasPBE = false; + + boolean readKeyMissing = rs.getBoolean(17); + if (rs.wasNull()) readKeyMissing = false; + boolean replyKeyMissing = rs.getBoolean(18); + if (rs.wasNull()) replyKeyMissing = false; + String pbePrompt = rs.getString(19); + info.setReadKeyUnknown(readKeyMissing); + info.setReplyKeyUnknown(replyKeyMissing); + info.setPassphrasePrompt(pbePrompt); + + if (authorId >= 0) info.setAuthorChannelId(authorId); + //if (author != null) info.setAuthorChannel(new Hash(author)); + info.setMessageId(messageId); + info.setScopeChannelId(scopeChannelId); + ChannelInfo scope = getChannel(scopeChannelId); + if (scope != null) + info.setURI(SyndieURI.createMessage(scope.getChannelHash(), messageId)); + info.setTargetChannelId(targetChannelId); + ChannelInfo chan = getChannel(targetChannelId); + if (chan != null) + info.setTargetChannel(chan.getChannelHash()); + info.setSubject(subject); + if ( (overwriteChannel != null) && (overwriteMessage >= 0) ) { + info.setOverwriteChannel(new Hash(overwriteChannel)); + info.setOverwriteMessage(overwriteMessage); + } + info.setForceNewThread(forceNewThread); + info.setRefuseReplies(refuseReplies); + info.setWasEncrypted(wasEncrypted); + info.setWasPassphraseProtected(wasPBE); + info.setWasPrivate(wasPrivate); + info.setWasAuthorized(wasAuthorized); + info.setWasAuthenticated(wasAuthenticated); + info.setIsCancelled(cancelled); + if (exp != null) + info.setExpiration(exp.getTime()); + else + 
info.setExpiration(-1); + } else { + return null; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message's info", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_HIERARCHY); + stmt.setLong(1, internalMessageId); + rs = stmt.executeQuery(); + List uris = new ArrayList(); + while (rs.next()) { + // referencedChannelHash, referencedMessageId + byte chan[] = rs.getBytes(1); + long refId = rs.getLong(2); + if (!rs.wasNull() && (chan != null) ) + uris.add(SyndieURI.createMessage(new Hash(chan), refId)); + } + info.setHierarchy(uris); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message list", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_TAG); + stmt.setLong(1, internalMessageId); + rs = stmt.executeQuery(); + Set encrypted = new HashSet(); + Set unencrypted = new HashSet(); + while (rs.next()) { + // tag, wasEncrypted + String tag = rs.getString(1); + boolean isPublic = rs.getBoolean(2); + if (rs.wasNull()) + isPublic = false; + if (isPublic) + unencrypted.add(tag); + else + encrypted.add(tag); + } + info.setPublicTags(unencrypted); + info.setPrivateTags(encrypted); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message's tags", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_PAGE_COUNT); + stmt.setLong(1, internalMessageId); + rs = stmt.executeQuery(); + if (rs.next()) { + int pages = rs.getInt(1); + if (!rs.wasNull()) + info.setPageCount(pages); + } else { + info.setPageCount(0); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message's tags", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + stmt = null; + rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_ATTACHMENT_COUNT); + stmt.setLong(1, internalMessageId); + rs = stmt.executeQuery(); + if (rs.next()) { + int pages = rs.getInt(1); + if (!rs.wasNull()) + info.setAttachmentCount(pages); + } else { + info.setAttachmentCount(0); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message's tags", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + // get the refs... + MessageReferenceBuilder builder = new MessageReferenceBuilder(this); + try { + info.setReferences(builder.loadReferences(internalMessageId)); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the message references", se); + return null; + } + + return info; + } + + private static final String SQL_GET_MESSAGE_PAGE_DATA = "SELECT dataString FROM messagePageData WHERE msgId = ? 
AND pageNum = ?"; + public String getMessagePageData(long internalMessageId, int pageNum) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_PAGE_DATA); + stmt.setLong(1, internalMessageId); + stmt.setInt(2, pageNum); + rs = stmt.executeQuery(); + if (rs.next()) + return rs.getString(1); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the page data", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + private static final String SQL_GET_MESSAGE_PAGE_CONFIG = "SELECT dataString FROM messagePageConfig WHERE msgId = ? AND pageNum = ?"; + public String getMessagePageConfig(long internalMessageId, int pageNum) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_PAGE_CONFIG); + stmt.setLong(1, internalMessageId); + stmt.setInt(2, pageNum); + rs = stmt.executeQuery(); + if (rs.next()) + return rs.getString(1); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the page config", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + private static final String SQL_GET_MESSAGE_ATTACHMENT_DATA = "SELECT dataBinary FROM messageAttachmentData WHERE msgId = ? AND attachmentNum = ?"; + public byte[] getMessageAttachmentData(long internalMessageId, int attachmentNum) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_ATTACHMENT_DATA); + stmt.setLong(1, internalMessageId); + stmt.setInt(2, attachmentNum); + rs = stmt.executeQuery(); + if (rs.next()) + return rs.getBytes(1); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the attachment data", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + private static final String SQL_GET_MESSAGE_ATTACHMENT_CONFIG = "SELECT dataString FROM messageAttachmentConfig WHERE msgId = ? 
AND attachmentNum = ?"; + public String getMessageAttachmentConfig(long internalMessageId, int attachmentNum) { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_MESSAGE_ATTACHMENT_CONFIG); + stmt.setLong(1, internalMessageId); + stmt.setInt(2, attachmentNum); + rs = stmt.executeQuery(); + if (rs.next()) + return rs.getString(1); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the attachment config", se); + return null; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return null; + } + + private static final String SQL_GET_PUBLIC_POSTING_CHANNELS = "SELECT channelId FROM channel WHERE allowPubPost = TRUE"; + /** list of channel ids (Long) that anyone is allowed to post to */ + public List getPublicPostingChannelIds() { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_PUBLIC_POSTING_CHANNELS); + rs = stmt.executeQuery(); + List rv = new ArrayList(); + while (rs.next()) { + long id = rs.getLong(1); + if (!rs.wasNull()) + rv.add(new Long(id)); + } + return rv; + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the public posting channels", se); + return Collections.EMPTY_LIST; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_BANNED = "SELECT channelHash FROM banned"; + /** list of channels (Hash) that this archive wants nothing to do with */ + public List getBannedChannels() { + ensureLoggedIn(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_BANNED); + rs = stmt.executeQuery(); + List rv = new ArrayList(); + while (rs.next()) { + byte chan[] = rs.getBytes(1); + if ( (chan != null) && (chan.length == Hash.HASH_LENGTH) ) + rv.add(new Hash(chan)); + } + return rv; + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the banned channels", se); + return Collections.EMPTY_LIST; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + /** + * ban the author or channel so that no more posts from that author + * or messages by any author in that channel will be allowed into the + * Syndie archive. 
If delete is specified, the messages themselves + * will be removed from the archive as well as the database + */ + public void ban(Hash bannedChannel, UI ui, boolean delete) { + ensureLoggedIn(); + addBan(bannedChannel, ui); + if (delete) + executeDelete(bannedChannel, ui); + } + private static final String SQL_BAN = "INSERT INTO banned (channelHash) VALUES (?)"; + private void addBan(Hash bannedChannel, UI ui) { + if (getBannedChannels().contains(bannedChannel)) { + ui.debugMessage("Channel already banned"); + return; + } + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_BAN); + stmt.setBytes(1, bannedChannel.getData()); + int rows = stmt.executeUpdate(); + if (rows != 1) { + throw new SQLException("Ban added " + rows + " rows?"); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error banning the channel", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_UNBAN = "DELETE FROM banned WHERE channelHash = ?"; + public void unban(Hash bannedChannel) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_UNBAN); + stmt.setBytes(1, bannedChannel.getData()); + int rows = stmt.executeUpdate(); + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error unbanning the channel", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private void executeDelete(Hash bannedChannel, UI ui) { + // delete the banned channel itself from the archive + // then list any messages posted by that author in other channels and + // delete them too + // (implicit index regen?) 
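
The ban path above is two-stage: addBan() records the channel hash in the banned table, and, when delete is requested, executeDelete() below removes both the archive files and the database rows for every URI scoped to, authored by, or targeting that channel. A caller-side sketch, again assuming this class is DBClient and that 'client', 'ui', and 'spamChannel' come from the caller (illustrative names only):

    package syndie.db;

    import net.i2p.data.Hash;

    class BanSketch {
        /** ban the channel and purge everything already imported from it */
        static void purge(DBClient client, UI ui, Hash spamChannel) {
            client.ban(spamChannel, ui, true); // addBan() plus executeDelete()
        }
        /** lift the ban later; content removed by the purge is not restored */
        static void forgive(DBClient client, Hash channel) {
            client.unban(channel);
        }
    }
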
+ List urisToDelete = getURIsToDelete(bannedChannel); + ui.debugMessage("Delete the following URIs: " + urisToDelete); + for (int i = 0; i < urisToDelete.size(); i++) { + SyndieURI uri = (SyndieURI)urisToDelete.get(i); + deleteFromArchive(uri, ui); + deleteFromDB(uri, ui); + } + } + private void deleteFromArchive(SyndieURI uri, UI ui) { + File archiveDir = getArchiveDir(); + File chanDir = new File(archiveDir, uri.getScope().toBase64()); + if (uri.getMessageId() == null) { + // delete the whole channel - all posts, metadata, and even the dir + File f[] = chanDir.listFiles(); + for (int i = 0; i < f.length; i++) { + f[i].delete(); + ui.debugMessage("Deleted channel file " + f[i].getPath()); + } + chanDir.delete(); + ui.debugMessage("Deleted channel dir " + chanDir.getPath()); + ui.statusMessage("Deleted " + (f.length-1) + " messages and the metadata for channel " + uri.getScope().toBase64() + " from the archive"); + } else { + // delete just the given message + File msgFile = new File(chanDir, uri.getMessageId().longValue() + Constants.FILENAME_SUFFIX); + msgFile.delete(); + ui.debugMessage("Deleted message file " + msgFile.getPath()); + ui.statusMessage("Deleted the post " + uri.getScope().toBase64() + " from the archive"); + } + } + private static final String SQL_DELETE_MESSAGE = "DELETE FROM channelMessage WHERE msgId = ?"; + private static final String SQL_DELETE_CHANNEL = "DELETE FROM channel WHERE channelId = ?"; + void deleteFromDB(SyndieURI uri, UI ui) { + if (uri.getMessageId() == null) { + // delete the whole channel, though all of the posts + // will be deleted separately + long scopeId = getChannelId(uri.getScope()); + try { + exec(ImportMeta.SQL_DELETE_TAGS, scopeId); + exec(ImportMeta.SQL_DELETE_POSTKEYS, scopeId); + exec(ImportMeta.SQL_DELETE_MANAGEKEYS, scopeId); + exec(ImportMeta.SQL_DELETE_ARCHIVE_URIS, scopeId); + exec(ImportMeta.SQL_DELETE_ARCHIVES, scopeId); + exec(ImportMeta.SQL_DELETE_CHAN_ARCHIVES, scopeId); + exec(ImportMeta.SQL_DELETE_READ_KEYS, scopeId); + exec(ImportMeta.SQL_DELETE_CHANNEL_META_HEADER, scopeId); + exec(ImportMeta.SQL_DELETE_CHANNEL_REF_URIS, scopeId); + exec(ImportMeta.SQL_DELETE_CHANNEL_REFERENCES, scopeId); + exec(SQL_DELETE_CHANNEL, scopeId); + ui.statusMessage("Deleted the channel " + uri.getScope().toBase64() + " from the database"); + } catch (SQLException se) { + ui.errorMessage("Unable to delete the channel " + uri.getScope().toBase64(), se); + } + } else { + // delete just the given message + long scopeId = getChannelId(uri.getScope()); + long internalId = getMessageId(scopeId, uri.getMessageId().longValue()); + try { + exec(ImportPost.SQL_DELETE_MESSAGE_HIERARCHY, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_TAGS, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_ATTACHMENT_DATA, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_ATTACHMENT_CONFIG, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_ATTACHMENTS, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_PAGE_DATA, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_PAGE_CONFIG, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_PAGES, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_REF_URIS, internalId); + exec(ImportPost.SQL_DELETE_MESSAGE_REFS, internalId); + exec(SQL_DELETE_MESSAGE, internalId); + ui.statusMessage("Deleted the post " + uri.getScope().toBase64() + ":" + uri.getMessageId() + " from the database"); + } catch (SQLException se) { + ui.errorMessage("Error deleting the post " + uri, se); + } + } + } + + private static final String SQL_GET_SCOPE_MESSAGES = "SELECT 
msgId, scopeChannelId, messageId FROM channelMessage WHERE scopeChannelId = ? OR authorChannelId = ? OR targetChannelId = ?"; + private List getURIsToDelete(Hash bannedChannel) { + List urisToDelete = new ArrayList(); + urisToDelete.add(SyndieURI.createScope(bannedChannel)); + long scopeId = getChannelId(bannedChannel); + if (scopeId >= 0) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_SCOPE_MESSAGES); + stmt.setLong(1, scopeId); + stmt.setLong(2, scopeId); + stmt.setLong(3, scopeId); + rs = stmt.executeQuery(); + while (rs.next()) { + long msgId = rs.getLong(1); + if (rs.wasNull()) + msgId = -1; + long scopeChanId = rs.getLong(2); + if (rs.wasNull()) + scopeChanId = -1; + long messageId = rs.getLong(3); + if (rs.wasNull()) + messageId = -1; + if ( (messageId >= 0) && (scopeChanId >= 0) ) { + ChannelInfo chanInfo = getChannel(scopeChanId); + if (chanInfo != null) + urisToDelete.add(SyndieURI.createMessage(chanInfo.getChannelHash(), messageId)); + } + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the messages to delete", se); + return Collections.EMPTY_LIST; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return urisToDelete; + } else { + // not known. noop + return urisToDelete; + } + } + + private static final String SQL_GET_NYMPREFS = "SELECT prefName, prefValue FROM nymPref WHERE nymId = ?"; + public Properties getNymPrefs(long nymId) { + ensureLoggedIn(); + Properties rv = new Properties(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_NYMPREFS); + stmt.setLong(1, nymId); + rs = stmt.executeQuery(); + while (rs.next()) { + String name = rs.getString(1); + String val = rs.getString(2); + rv.setProperty(name, val); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error getting the nym's preferences", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + private static final String SQL_SET_NYMPREFS = "INSERT INTO nymPref (nymId, prefName, prefValue) VALUES (?, ?, ?)"; + private static final String SQL_DELETE_NYMPREFS = "DELETE FROM nymPref WHERE nymId = ?"; + public void setNymPrefs(long nymId, Properties prefs) { + ensureLoggedIn(); + PreparedStatement stmt = null; + try { + exec(SQL_DELETE_NYMPREFS, nymId); + stmt = _con.prepareStatement(SQL_SET_NYMPREFS); + for (Iterator iter = prefs.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = prefs.getProperty(name); + stmt.setLong(1, nymId); + stmt.setString(2, name); + stmt.setString(3, val); + stmt.executeUpdate(); + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error setting the nym's preferences", se); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private void ensureLoggedIn() throws IllegalStateException { + try { + if ( (_con != null) && (!_con.isClosed()) && (_nymId >= 0) ) + return; + } catch (SQLException se) { + // problem detecting isClosed? + } + throw new IllegalStateException("Not logged in"); + } + + public void backup(UI ui, String out, boolean includeArchive) { + String dbFileRoot = getDBFileRoot(); + if (dbFileRoot == null) { + ui.errorMessage("Unable to determine the database file root. 
Is this a HSQLDB file URL?"); + ui.commandComplete(-1, null); + return; + } + long now = System.currentTimeMillis(); + ui.debugMessage("Backing up the database from " + dbFileRoot + " to " + out); + try { + exec("CHECKPOINT"); + } catch (SQLException se) { + ui.errorMessage("Error halting the database to back it up!", se); + ui.commandComplete(-1, null); + return; + } + try { + ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(out)); + + ZipEntry entry = new ZipEntry("db.properties"); + File f = new File(dbFileRoot + ".properties"); + entry.setSize(f.length()); + entry.setTime(now); + zos.putNextEntry(entry); + copy(f, zos); + zos.closeEntry(); + + entry = new ZipEntry("db.script"); + f = new File(dbFileRoot + ".script"); + entry.setSize(f.length()); + entry.setTime(now); + zos.putNextEntry(entry); + copy(f, zos); + zos.closeEntry(); + + entry = new ZipEntry("db.backup"); + f = new File(dbFileRoot + ".backup"); + entry.setSize(f.length()); + entry.setTime(now); + zos.putNextEntry(entry); + copy(f, zos); + zos.closeEntry(); + + // since we just did a CHECKPOINT, no need to back up the .data file + entry = new ZipEntry("db.data"); + entry.setSize(0); + entry.setTime(now); + zos.putNextEntry(entry); + zos.closeEntry(); + + if (includeArchive) + backupArchive(ui, zos); + + zos.finish(); + zos.close(); + + ui.statusMessage("Database backed up to " + out); + ui.commandComplete(0, null); + } catch (IOException ioe) { + ui.errorMessage("Error backing up the database", ioe); + ui.commandComplete(-1, null); + } + } + + private void backupArchive(UI ui, ZipOutputStream out) throws IOException { + ui.errorMessage("Backing up the archive is not yet supported."); + ui.errorMessage("However, you can just, erm, tar cjvf the $data/archive/ dir"); + } + + private String getDBFileRoot() { return getDBFileRoot(_url); } + private String getDBFileRoot(String url) { + if (url.startsWith("jdbc:hsqldb:file:")) { + String file = url.substring("jdbc:hsqldb:file:".length()); + int end = file.indexOf(";"); + if (end != -1) + file = file.substring(0, end); + return file; + } else { + return null; + } + } + + private void copy(File in, OutputStream out) throws IOException { + byte buf[] = new byte[4096]; + FileInputStream fis = null; + try { + fis = new FileInputStream(in); + int read = -1; + while ( (read = fis.read(buf)) != -1) + out.write(buf, 0, read); + fis.close(); + fis = null; + } finally { + if (fis != null) fis.close(); + } + } + + /** + * @param in zip archive containing db.{properties,script,backup,data} + * to be extracted onto the given db + * @param db JDBC url (but it must be an HSQLDB file URL). 
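
backup() depends on getDBFileRoot() above to turn the HSQLDB JDBC URL into a filesystem prefix, then zips that prefix's .properties, .script, and .backup files (plus an empty db.data placeholder, mirroring the code's assumption that a fresh CHECKPOINT makes the .data file unnecessary). A standalone illustration of the derivation, using a made-up URL:

    /** demo of the file-root parsing used by backup()/restore(); the URL is hypothetical */
    public class DbFileRootDemo {
        public static void main(String[] args) {
            String url = "jdbc:hsqldb:file:/tmp/syndie/db/syndie;shutdown=true"; // made-up path
            String root = null;
            if (url.startsWith("jdbc:hsqldb:file:")) {
                root = url.substring("jdbc:hsqldb:file:".length());
                int end = root.indexOf(";");
                if (end != -1)
                    root = root.substring(0, end);
            }
            // prints "/tmp/syndie/db/syndie"; backup() would then archive
            // /tmp/syndie/db/syndie.properties, .script and .backup into the zip
            System.out.println(root);
        }
    }
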
If the database + * already exists (and is of a nonzero size), it will NOT be + * overwritten + */ + public void restore(UI ui, String in, String db) { + File inFile = new File(in); + if ( (!inFile.exists()) || (inFile.length() <= 0) ) { + ui.errorMessage("Database backup does not exist: " + inFile.getPath()); + ui.commandComplete(-1, null); + return; + } + + String root = getDBFileRoot(db); + if (root == null) { + ui.errorMessage("Database restoration is only possible with file urls"); + ui.commandComplete(-1, null); + return; + } + File prop = new File(root + ".properties"); + File script = new File(root + ".script"); + File backup = new File(root + ".backup"); + File data = new File(root + ".data"); + if ( (prop.exists() && (prop.length() > 0)) || + (script.exists() && (script.length() > 0)) || + (backup.exists() && (backup.length() > 0)) || + (data.exists() && (data.length() > 0)) ) { + ui.errorMessage("Not overwriting existing non-empty database files: "); + ui.errorMessage(prop.getPath()); + ui.errorMessage(script.getPath()); + ui.errorMessage(backup.getPath()); + ui.errorMessage(data.getPath()); + ui.errorMessage("If they are corrupt or you really want to replace them,"); + ui.errorMessage("delete them first, then rerun the restore command"); + ui.commandComplete(-1, null); + return; + } + + String url = _url; + String login = _login; + String pass = _pass; + long nymId = _nymId; + + if (_con != null) { + ui.statusMessage("Disconnecting from the database to restore..."); + close(); + } + + ui.statusMessage("Restoring the database from " + in + " to " + root); + + try { + ZipInputStream zis = new ZipInputStream(new FileInputStream(in)); + + while (true) { + ZipEntry entry = zis.getNextEntry(); + if (entry == null) + break; + String name = entry.getName(); + if ("db.properties".equals(name)) { + copy(zis, prop); + } else if ("db.script".equals(name)) { + copy(zis, script); + } else if ("db.backup".equals(name)) { + copy(zis, backup); + } else if ("db.data".equals(name)) { + copy(zis, data); + } else { + ui.debugMessage("Ignoring backed up file " + name + " for now"); + } + } + + zis.close(); + + ui.statusMessage("Database restored from " + in); + + if ( (url != null) && (login != null) && (pass != null) ) { + ui.statusMessage("Reconnecting to the database"); + try { + connect(url, login, pass); + } catch (SQLException se) { + ui.errorMessage("Not able to log back into the database", se); + } + } + ui.commandComplete(0, null); + } catch (IOException ioe) { + ui.errorMessage("Error backing up the database", ioe); + ui.commandComplete(-1, null); + } + } + + private void copy(InputStream in, File out) throws IOException { + byte buf[] = new byte[4096]; + FileOutputStream fos = null; + try { + fos = new FileOutputStream(out); + int read = -1; + while ( (read = in.read(buf)) != -1) + fos.write(buf, 0, read); + fos.close(); + fos = null; + } finally { + if (fos != null) fos.close(); + } + } + + private static final String SQL_GET_ALIASES = "SELECT aliasName, aliasValue FROM nymCommandAlias WHERE nymId = ? 
ORDER BY aliasName ASC"; + /** map of command name (String) to command line (String) */ + public Map getAliases(long nymId) { + TreeMap rv = new TreeMap(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _con.prepareStatement(SQL_GET_ALIASES); + stmt.setLong(1, nymId); + rs = stmt.executeQuery(); + while (rs.next()) { + String name = (String)rs.getString(1); + String value = rs.getString(2); + if ( (name != null) && (value != null) && (name.length() > 0) ) + rv.put(name, value); + } + rs.close(); + rs = null; + stmt.close(); + stmt = null; + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Error fetching aliases", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + return rv; + } + + private static final String SQL_DELETE_ALIAS = "DELETE FROM nymCommandAlias WHERE nymId = ? AND aliasName = ?"; + private static final String SQL_ADD_ALIAS = "INSERT INTO nymCommandAlias (nymId, aliasName, aliasValue) VALUES (?, ?, ?)"; + public void addAlias(long nymId, String name, String value) { + PreparedStatement stmt = null; + try { + stmt = _con.prepareStatement(SQL_DELETE_ALIAS); + stmt.setLong(1, nymId); + stmt.setString(2, name); + stmt.executeUpdate(); + stmt.close(); + + if ( (value != null) && (value.length() > 0) ) { + stmt = _con.prepareStatement(SQL_ADD_ALIAS); + stmt.setLong(1, nymId); + stmt.setString(2, name); + stmt.setString(3, value); + stmt.executeUpdate(); + stmt.close(); + } + stmt = null; + } catch (SQLException se) { + if (_log.shouldLog(Log.WARN)) + _log.warn("Error updating alias", se); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } +} diff --git a/src/syndie/db/HTTPSyndicator.java b/src/syndie/db/HTTPSyndicator.java new file mode 100644 index 0000000..208dd37 --- /dev/null +++ b/src/syndie/db/HTTPSyndicator.java @@ -0,0 +1,527 @@ +/* + * HTTPSyndicator.java + * + * Created on September 19, 2006, 12:41 PM + * + * To change this template, choose Tools | Template Manager + * and open the template in the editor. + */ + +package syndie.db; + +import java.io.*; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import net.i2p.data.Base64; +import net.i2p.data.Hash; +import net.i2p.util.EepGet; +import net.i2p.util.EepGetScheduler; +import net.i2p.util.EepPost; +import syndie.Constants; +import syndie.data.*; + +/** + * request those files from the archive, saving them to client.getTempDir() + * iterate across those files, attempting to import each one + * if it fails due to PBE, add it to the pbefail list + * if it fails for other reasons, add it to the unimported list (and delete the file?) 
+ * if it succeeds, delete the file + * display the summary of the import process + */ +public class HTTPSyndicator { + private String _archiveURL; + private String _proxyHost; + private int _proxyPort; + private List _syndieURIs; + private DBClient _client; + private UI _ui; + + private List _fetchedFiles; + private List _fetchedURIs; + private List _pendingPBEFiles; + private List _pendingPBEURIs; + private List _pendingPBEPrompts; + + private List _postURIs; + private String _postURLOverride; + private String _postPassphrase; + private boolean _postShouldDeleteOutbound; + private ArchiveIndex _remoteIndex; + private List _postToDelete; + + public HTTPSyndicator(String archiveURL, String proxyHost, int proxyPort, DBClient client, UI ui, ArchiveIndex index) { + _archiveURL = archiveURL; + _proxyHost = proxyHost; + _proxyPort = proxyPort; + _client = client; + _ui = ui; + _remoteIndex = index; + + _fetchedFiles = new ArrayList(); + _fetchedURIs = new ArrayList(); + _pendingPBEFiles = new ArrayList(); + _pendingPBEURIs = new ArrayList(); + _pendingPBEPrompts = new ArrayList(); + _postToDelete = new ArrayList(); + + _postURIs = new ArrayList(); + _postShouldDeleteOutbound = false; + _postURLOverride = null; + _postPassphrase = null; + } + + /** + * fetch the posts/replies/metadata from the archive, saving them to disk + * but not attempting to import them yet + */ + public boolean fetch(List syndieURIs) { + _syndieURIs = syndieURIs; + if (_archiveURL.startsWith("https")) { + fetchSSL(); + } else if (_archiveURL.startsWith("http")) { + fetchHTTP(); + } else { + fetchFiles(); + } + return true; + } + + private void fetchSSL() { + // URL fetch + _ui.errorMessage("SSL not yet supported"); + } + private void fetchHTTP() { + // eepget-driven, one at a time via EepGetScheduler + if (!_archiveURL.endsWith("/")) + _archiveURL = _archiveURL + "/"; + List urls = new ArrayList(); + List files = new ArrayList(); + Map httpURLToSyndieURI = new HashMap(); + + File tmpDir = _client.getTempDir(); + int msgDirIndex = 0; + File msgDir = new File(tmpDir, "httpsync"+msgDirIndex); + while (msgDir.exists()) { + msgDirIndex++; + msgDir = new File(tmpDir, "httpsync"+msgDirIndex); + } + msgDir.mkdirs(); + + for (int i = 0; i < _syndieURIs.size(); i++) { + SyndieURI uri = (SyndieURI)_syndieURIs.get(i); + String url = null; + if (uri.getMessageId() == null) + url = _archiveURL + uri.getScope().toBase64() + "/meta" + Constants.FILENAME_SUFFIX; + else + url = _archiveURL + uri.getScope().toBase64() + "/" + uri.getMessageId().longValue() + Constants.FILENAME_SUFFIX; + + File tmpFile = new File(msgDir, i + Constants.FILENAME_SUFFIX); + httpURLToSyndieURI.put(url, uri); + urls.add(url); + files.add(tmpFile); + } + + HTTPStatusListener lsnr = new HTTPStatusListener(httpURLToSyndieURI); + EepGetScheduler sched = new EepGetScheduler(_client.ctx(), urls, files, _proxyHost, _proxyPort, lsnr); + sched.fetch(true); // blocks until complete + _ui.statusMessage("Fetch of selected URIs complete"); + //while (lsnr.transfersPending()) { + // try { Thread.sleep(1000); } catch (InterruptedException ie) {} + //} + } + + private class HTTPStatusListener implements EepGet.StatusListener { + private Map _httpURLToSyndieURI; + public HTTPStatusListener(Map httpURLToSyndieURI) { + _httpURLToSyndieURI = httpURLToSyndieURI; + } + public void bytesTransferred(long alreadyTransferred, int currentWrite, long bytesTransferred, long bytesRemaining, String url) { + _ui.debugMessage("Transferred: " + bytesTransferred); + } + public void 
transferComplete(long alreadyTransferred, long bytesTransferred, long bytesRemaining, String url, String outputFile, boolean notModified) { + _ui.debugMessage("Transfer complete: " + bytesTransferred + " for " + url); + _fetchedFiles.add(new File(outputFile)); + _fetchedURIs.add(_httpURLToSyndieURI.remove(url)); + } + public void attemptFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt, int numRetries, Exception cause) { + _ui.debugMessage("Transfer attempt failed: " + bytesTransferred + " from " + url, cause); + } + public void transferFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt) { + _ui.statusMessage("Transfer totally failed of " + url); + _httpURLToSyndieURI.remove(url); + } + public void headerReceived(String url, int currentAttempt, String key, String val) { + _ui.debugMessage("Header received: " + key + "=" + val); + } + public void attempting(String url) { + _ui.statusMessage("Fetching " + url + "..."); + } + public boolean transfersPending() { return _httpURLToSyndieURI.size() > 0; } + } + + private void fetchFiles() { + File tmpDir = _client.getTempDir(); + int msgDirIndex = 0; + File msgDir = new File(tmpDir, "httpsync"+msgDirIndex); + while (msgDir.exists()) { + msgDirIndex++; + msgDir = new File(tmpDir, "httpsync"+msgDirIndex); + } + msgDir.mkdirs(); + int curFile = 0; + File archiveDir = new File(_archiveURL); + _ui.debugMessage("Fetching " + _syndieURIs); + for (int i = 0; i < _syndieURIs.size(); i++) { + SyndieURI uri = (SyndieURI)_syndieURIs.get(i); + Hash scope = uri.getScope(); + if (scope == null) { + _ui.errorMessage("Invalid fetch URI - has no scope: " + uri); + continue; + } + + File srcDir = new File(archiveDir, scope.toBase64()); + File srcFile = null; + Long msgId = uri.getMessageId(); + if (msgId == null) + srcFile = new File(srcDir, "meta" + Constants.FILENAME_SUFFIX); + else + srcFile = new File(srcDir, msgId.longValue() + Constants.FILENAME_SUFFIX); + if (srcFile.exists()) { + _ui.debugMessage("Fetching file from " + srcFile.getPath() + ": " + uri); + File dest = new File(msgDir, curFile + Constants.FILENAME_SUFFIX); + boolean ok = copy(srcFile, dest); + if (!ok) { + dest.delete(); + _ui.debugMessage(uri + " could not be fetched from " + srcFile.getPath()); + return; + } else { + _fetchedFiles.add(dest); + _fetchedURIs.add(uri); + _ui.debugMessage("URI fetched: " + uri); + } + curFile++; + } else { + _ui.errorMessage("Referenced URI is not in the archive: " + uri + " as " + srcFile.getPath()); + } + } + } + + private boolean copy(File src, File dest) { + FileInputStream fis = null; + FileOutputStream fos = null; + try { + fis = new FileInputStream(src); + fos = new FileOutputStream(dest); + byte buf[] = new byte[4096]; + int read = 0; + while ( (read = fis.read(buf)) != -1) + fos.write(buf, 0, read); + fis.close(); + fos.close(); + fis = null; + fos = null; + return true; + } catch (IOException ioe) { + _ui.errorMessage("Error copying the file " + src.getPath() + " to " + dest.getPath()); + return false; + } finally { + if (fis != null) try { fis.close(); } catch (IOException ioe) {} + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + public int importFetched() { + int imported = 0; + _ui.debugMessage("Attempting to import " + _fetchedFiles.size() + " messages"); + for (int i = 0; i < _fetchedFiles.size(); i++) { + Importer imp = new Importer(_client, _client.getPass()); + File f = (File)_fetchedFiles.get(i); + SyndieURI uri = (SyndieURI)_fetchedURIs.get(i); + 
_ui.debugMessage("Importing " + uri + " from " + f.getPath()); + boolean ok; + try { + NestedUI nested = new NestedUI(_ui); + ok = imp.processMessage(nested, new FileInputStream(f), _client.getLoggedInNymId(), _client.getPass(), null); + if (ok && (nested.getExitCode() >= 0)) { + _ui.debugMessage("Import successful for " + uri); + f.delete(); + imported++; + } else { + _ui.debugMessage("Could not import " + f.getPath()); + importFailed(uri, f); + } + } catch (IOException ioe) { + _ui.errorMessage("Error importing the message for " + uri, ioe); + } + } + return imported; + } + private void importFailed(SyndieURI uri, File localCopy) throws IOException { + Enclosure enc = new Enclosure(new FileInputStream(localCopy)); + String prompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + if (prompt != null) { + _pendingPBEFiles.add(localCopy); + _pendingPBEURIs.add(uri); + _pendingPBEPrompts.add(prompt); + } else { + // record why the import failed in the db (missing readKey, missing replyKey, corrupt, unauthorized, etc) + } + } + public int countMissingPassphrases() { return _pendingPBEPrompts.size(); } + public String getMissingPrompt(int index) { return (String)_pendingPBEPrompts.get(index); } + public SyndieURI getMissingURI(int index) { return (SyndieURI)_pendingPBEURIs.get(index); } + public void importPBE(int index, String passphrase) { + Importer imp = new Importer(_client, _client.getPass()); + File f = (File)_pendingPBEFiles.get(index); + SyndieURI uri = (SyndieURI)_pendingPBEURIs.get(index); + boolean ok; + try { + NestedUI nested = new NestedUI(_ui); + ok = imp.processMessage(nested, new FileInputStream(f), _client.getLoggedInNymId(), _client.getPass(), passphrase); + if (ok && (nested.getExitCode() >= 0) && (nested.getExitCode() != 1) ) { + f.delete(); + _pendingPBEFiles.remove(index); + _pendingPBEPrompts.remove(index); + _pendingPBEURIs.remove(index); + _ui.statusMessage("Passphrase correct. 
Message imported: " + uri); + _ui.commandComplete(0, null); + } else { + _ui.errorMessage("Passphrase incorrect"); + _ui.commandComplete(-1, null); + } + } catch (IOException ioe) { + _ui.errorMessage("Error importing the message with a passphrase for " + uri, ioe); + _ui.commandComplete(-1, null); + } + } + + public void post() { + if (_postURIs.size() <= 0) { + _ui.statusMessage("No messages to post"); + _ui.commandComplete(0, null); + } else if (_archiveURL.startsWith("https")) { + postSSL(); + } else if (_archiveURL.startsWith("http")) { + postHTTP(); + } else { + _ui.errorMessage("Only know how to post to HTTP or HTTPS"); + _ui.commandComplete(-1, null); + } + } + public void setPostURLOverride(String url) { _postURLOverride = url; } + public void setDeleteOutboundAfterSend(boolean shouldDelete) { _postShouldDeleteOutbound = shouldDelete; } + public void setPostPassphrase(String passphrase) { _postPassphrase = passphrase; } + + private void postSSL() { + _ui.errorMessage("Only know how to post to HTTP"); + _ui.commandComplete(-1, null); + } + + private void postHTTP() { + Map fields = new HashMap(); + int numMeta = 0; + int numPosts = 0; + for (int i = 0; i < _postURIs.size(); i++) { + SyndieURI uri = (SyndieURI)_postURIs.get(i); + File chanDir = new File(_client.getArchiveDir(), uri.getScope().toBase64()); + File f = null; + String name = null; + if (uri.getMessageId() == null) { + name = "meta" + numMeta; + f = new File(chanDir, "meta" + Constants.FILENAME_SUFFIX); + numMeta++; + } else { + name = "post" + numPosts; + f = new File(chanDir, uri.getMessageId().longValue() + Constants.FILENAME_SUFFIX); + numPosts++; + } + fields.put(name, f); + _ui.debugMessage("Posting " + f.getPath() + " as " + name); + } + _ui.statusMessage("Posting " + numMeta + " metadata messages and " + numPosts + " posts"); + + if (_postPassphrase != null) + fields.put("pass", _postPassphrase); + + EepPost post = new EepPost(_client.ctx()); + String url = null; + if (_postURLOverride == null) { + if (_archiveURL.endsWith("/")) + url = _archiveURL + "import.cgi"; + else + url = _archiveURL + "/import.cgi"; + } else { + url = _postURLOverride; + } + + Blocker onCompletion = new Blocker(); + _ui.debugMessage("About to post messages to " + url); + post.postFiles(url, _proxyHost, _proxyPort, fields, onCompletion); + while (onCompletion.notYetComplete()) { + _ui.debugMessage("Post in progress..."); + try { + synchronized (onCompletion) { + onCompletion.wait(1000); + } + } catch (InterruptedException ie) {} + } + _ui.statusMessage("Files posted"); + if (_postShouldDeleteOutbound) { + for (int i = 0; i < _postToDelete.size(); i++) { + File f = (File)_postToDelete.get(i); + _ui.statusMessage("Removing " + f.getPath() + " from the outbound queue"); + f.delete(); + File parent = f.getParentFile(); + String siblings[] = parent.list(); + if ( (siblings == null) || (siblings.length == 0) ) { + parent.delete(); + _ui.debugMessage("Removing empty queue dir " + parent.getPath()); + } + } + } + _ui.commandComplete(0, null); + } + private class Blocker implements Runnable { + private boolean _complete; + public Blocker() { _complete = false; } + public void run() { + _complete = true; + synchronized (Blocker.this) { + Blocker.this.notifyAll(); + } + } + public boolean notYetComplete() { return !_complete; } + } + + /** + * Schedule a number of URIs to be sent to the remote archive. 
The + * style has four valid values: + * outbound: send all posts and metadata in the local outbound queue + * outboundmeta: send all of the metadata in the local outbound queue + * archive: send all posts and metadata in the local archive or outbound queue + * archivemeta: send all of the metadata in the local archive or outbound queue + * + * @param knownChanOnly if true, only send posts or metadata where the remote archive knows about the channel + */ + public void schedulePut(String style, boolean knownChanOnly) { + _ui.debugMessage("Scheduling put of " + style); + if ("outbound".equalsIgnoreCase(style)) { + scheduleOutbound(knownChanOnly); + } else if ("outboundmeta".equalsIgnoreCase(style)) { + scheduleOutboundMeta(knownChanOnly); + } else if ("archive".equalsIgnoreCase(style)) { + scheduleArchive(knownChanOnly); + } else if ("archivemeta".equalsIgnoreCase(style)) { + scheduleArchiveMeta(knownChanOnly); + } else { + _ui.errorMessage("Schedule style is unsupported. Valid values are 'outbound', 'outboundmeta', 'archive', and 'archivemeta'"); + _ui.commandComplete(-1, null); + } + } + + private void scheduleOutbound(boolean knownChanOnly) { schedule(_client.getOutboundDir(), false, true, knownChanOnly); } + private void schedule(File rootDir, boolean metaOnly, boolean isOutbound, boolean knownChanOnly) { + int numMeta = 0; + int numPost = 0; + long numBytes = 0; + File chanDirs[] = rootDir.listFiles(); + _ui.debugMessage("Number of potential channel dirs: " + chanDirs.length + " in " + rootDir.getPath()); + for (int i = 0; i < chanDirs.length; i++) { + if (!chanDirs[i].isDirectory()) + continue; + File chanMessages[] = chanDirs[i].listFiles(); + byte chanHash[] = Base64.decode(chanDirs[i].getName()); + if ( (chanHash == null) || (chanHash.length != Hash.HASH_LENGTH) ) { + _ui.debugMessage("Not scheduling the channel dir " + chanDirs[i].getName()); + continue; + } + Hash chan = new Hash(chanHash); + ArchiveChannel remote = _remoteIndex.getChannel(SyndieURI.createScope(chan)); + if (knownChanOnly && (remote == null)) { + _ui.debugMessage("Not scheduling the channel, since it isn't known remotely and we only send known"); + continue; + } + for (int j = 0; j < chanMessages.length; j++) { + String name = chanMessages[j].getName(); + boolean isMeta = false; + SyndieURI uri = null; + if (("meta" + Constants.FILENAME_SUFFIX).equalsIgnoreCase(name)) { + isMeta = true; + uri = SyndieURI.createScope(chan); + } else if (name.endsWith(Constants.FILENAME_SUFFIX) && (name.length() > Constants.FILENAME_SUFFIX.length())) { + if (!metaOnly) { + try { + String msgIdStr = name.substring(0, name.length()-Constants.FILENAME_SUFFIX.length()); + Long msgId = Long.valueOf(msgIdStr); + uri = SyndieURI.createMessage(chan, msgId.longValue()); + } catch (NumberFormatException nfe) { + // skip + } + } else { + _ui.debugMessage("Not scheduling the post, since we are only sending metadata"); + } + } + boolean scheduled = false; + if (uri != null) { + if (uri.getMessageId() != null) { + if (null == _remoteIndex.getMessage(uri)) { + if (!_postURIs.contains(uri)) { + _postURIs.add(uri); + if (isMeta) + numMeta++; + else + numPost++; + numBytes += chanMessages[j].length(); + } + scheduled = true; + } else { + _ui.debugMessage("Not scheduling the post, since the remote site has it"); + } + } else { + // check the version in the db + if (remote == null) { + if (!_postURIs.contains(uri)) { + _postURIs.add(uri); + if (isMeta) + numMeta++; + else + numPost++; + numBytes += chanMessages[j].length(); + } + scheduled = true; + } 
else { + long chanId = _client.getChannelId(uri.getScope()); + ChannelInfo info = _client.getChannel(chanId); + if (info.getEdition() > remote.getVersion()) { + if (!_postURIs.contains(uri)) { + _postURIs.add(uri); + if (isMeta) + numMeta++; + else + numPost++; + numBytes += chanMessages[j].length(); + } + } else { + _ui.debugMessage("Not scheduling the metadata, since the remote site has that or a newer version"); + } + } + } + } + if (scheduled && isOutbound && _postShouldDeleteOutbound) { + _ui.debugMessage("Scheduling " + chanMessages[j].getName() + " for deletion after post"); + _postToDelete.add(chanMessages[j]); + } else { + _ui.debugMessage("Not scheduling " + chanMessages[j].getName() + " for deletion after post (sched? " + scheduled + " out? " + isOutbound + " del? " + _postShouldDeleteOutbound + ")"); + } + } + } + _ui.debugMessage("Scheduling post of " + _postURIs); + _ui.statusMessage("Scheduled upload of " + numPost + " posts and " + numMeta + " channel metadata messages"); + _ui.statusMessage("Total size to be uploaded: " + ((numBytes+1023)/1024) + " kilobytes"); + } + private void scheduleOutboundMeta(boolean knownChanOnly) { schedule(_client.getOutboundDir(), true, true, knownChanOnly); } + private void scheduleArchive(boolean knownChanOnly) { schedule(_client.getArchiveDir(), false, false, knownChanOnly); } + private void scheduleArchiveMeta(boolean knownChanOnly) { schedule(_client.getArchiveDir(), true, false, knownChanOnly); } +} diff --git a/src/syndie/db/ImportMeta.java b/src/syndie/db/ImportMeta.java new file mode 100644 index 0000000..839db0c --- /dev/null +++ b/src/syndie/db/ImportMeta.java @@ -0,0 +1,768 @@ +package syndie.db; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.sql.Types; +import java.util.Iterator; +import java.util.List; +import java.util.Properties; +import net.i2p.data.Base64; +import net.i2p.data.DataFormatException; +import net.i2p.data.DataHelper; +import net.i2p.data.Hash; +import net.i2p.data.PublicKey; +import net.i2p.data.SessionKey; +import net.i2p.data.SigningPublicKey; +import syndie.Constants; +import syndie.data.Enclosure; +import syndie.data.EnclosureBody; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + * + */ +class ImportMeta { + /** + * The signature has been validated, so now import what we can + */ + public static boolean process(DBClient client, UI ui, Enclosure enc, long nymId, String nymPassphrase, String bodyPassphrase) { + EnclosureBody body = null; + SigningPublicKey ident = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + Hash identHash = ident.calculateHash(); + if (client.getBannedChannels().contains(identHash)) { + ui.errorMessage("Not importing banned metadata for " + identHash.toBase64()); + ui.commandComplete(-1, null); + return false; + } + SessionKey key = enc.getHeaderSessionKey(Constants.MSG_HEADER_BODYKEY); + if (key != null) { + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + ui.errorMessage("Error processing with the body key (" + Base64.encode(key.getData()) + " len=" + key.getData().length + ")", dfe); + ui.commandComplete(-1, null); + return false; + } catch (IOException ioe) { + ui.errorMessage("Error processing with the body key", ioe); + ui.commandComplete(-1, null); + return false; 
+ } + } else { + String prompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + byte promptSalt[] = enc.getHeaderBytes(Constants.MSG_HEADER_PBE_PROMPT_SALT); + if ( (prompt != null) && (promptSalt != null) && (promptSalt.length != 0) ) { + String passphrase = bodyPassphrase; + if (passphrase == null) { + ui.errorMessage("Passphrase required to extract this message"); + ui.errorMessage("Please use --passphrase 'passphrase value', where the passphrase value is the answer to:"); + ui.errorMessage(CommandImpl.strip(prompt)); + body = new UnreadableEnclosureBody(client.ctx()); + } else { + key = client.ctx().keyGenerator().generateSessionKey(promptSalt, DataHelper.getUTF8(passphrase)); + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + ui.errorMessage("Invalid passphrase", dfe); + body = new UnreadableEnclosureBody(client.ctx()); + } catch (IOException ioe) { + ui.debugMessage("Invalid passphrase", ioe); + body = new UnreadableEnclosureBody(client.ctx()); + } + } + } else { + List keys = client.getReadKeys(identHash, nymId, nymPassphrase); + for (int i = 0; keys != null && i < keys.size(); i++) { + // try decrypting with that key + try { + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), (SessionKey)keys.get(i)); + break; + } catch (IOException ioe) { + ui.debugMessage("Error processing with a read key", ioe); + continue; + } catch (DataFormatException dfe) { + ui.debugMessage("Error processing with a read key", dfe); + continue; + } + } + if (body == null) { + ui.errorMessage("No read keys were successful at decrypting the message"); + body = new UnreadableEnclosureBody(client.ctx()); + } + } + } + + ui.debugMessage("enclosure: " + enc + "\nbody: " + body); + boolean ok = importMeta(client, ui, nymId, nymPassphrase, enc, body); + if (ok) { + if (body instanceof UnreadableEnclosureBody) + ui.commandComplete(1, null); + else + ui.commandComplete(0, null); + } else { + ui.commandComplete(-1, null); + } + return ok; + + } + + /** + * interpret the bits in the enclosure body and headers, importing them + * into the db + */ + private static boolean importMeta(DBClient client, UI ui, long nymId, String passphrase, Enclosure enc, EnclosureBody body) { + SigningPublicKey identKey = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + Hash ident = identKey.calculateHash(); + Long edition = enc.getHeaderLong(Constants.MSG_META_HEADER_EDITION); + if ( (edition == null) || (edition.longValue() < 0) ) + edition = new Long(0); + // see if we have the info already (with the same or later edition), + // since if we do, there's nothing to import. + long knownEdition = client.getKnownEdition(ident); + if (knownEdition > edition.longValue()) { + ui.statusMessage("already known edition " + knownEdition); + return true; + } + + // if we don't... 
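// Note: all of the channel-related writes below (the channel row, channelTag, channelPostKey,
// channelManageKey, channelArchive, channelReadKey, channelMetaHeader, and channelReferenceGroup)
// happen inside a single JDBC transaction: autoCommit is disabled, the whole batch is committed
// at the end, and any SQLException rolls it back, so a half-imported channel is never left in
// the database.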
+ Connection con = client.con(); + boolean wasAuto = false; + try { + wasAuto = con.getAutoCommit(); + con.commit(); + con.setAutoCommit(false); + long channelId = -1; + if (knownEdition < 0) // brand new + channelId = insertIntoChannel(client, ui, nymId, passphrase, enc, body, identKey, ident, edition.longValue()); + else + channelId = updateChannel(client, ui, nymId, passphrase, enc, body, ident, edition.longValue()); + if (channelId < 0) { return false; } + // clear out & insert into channelTag + setTags(client, ui, channelId, enc, body); + // clear out & insert into channelPostKey + setPostKeys(client, ui, channelId, enc, body); + // clear out & insert into channelManageKey + setManageKeys(client, ui, channelId, enc, body); + // clear out (recursively) and insert into channelArchive + setChannelArchives(client, ui, channelId, enc, body); + // insert into channelReadKey + setChannelReadKeys(client, channelId, enc, body); + // insert into channelMetaHeader + setChannelMetaHeaders(client, channelId, enc, body); + // insert into channelReferenceGroup + setChannelReferences(client, channelId, body); + // (plus lots of 'insert into uriAttribute' interspersed) + con.commit(); + ui.statusMessage("committed as channel " + channelId); + + saveToArchive(client, ui, ident, enc); + return true; + } catch (SQLException se) { + ui.errorMessage("Error importing", se); + try { + con.rollback(); + } catch (SQLException ex) { + ui.errorMessage("Unable to rollback on error", ex); + } + return false; + } finally { + try { + con.setAutoCommit(wasAuto); + } catch (SQLException ex) { + // ignore + } + } + } + + /* + * CREATE CACHED TABLE channel ( + * -- locally unique id + * channelId BIGINT IDENTITY PRIMARY KEY + * , channelHash VARBINARY(32) + * , identKey VARBINARY(256) + * , encryptKey VARBINARY(256) + * , edition BIGINT + * , name VARCHAR(256) + * , description VARCHAR(1024) + * -- can unauthorized people post new topics? + * , allowPubPost BOOLEAN + * -- can unauthorized people reply to existing topics? 
+ * , allowPubReply BOOLEAN + * , UNIQUE (channelHash) + * ); + */ + private static final String SQL_INSERT_CHANNEL = "INSERT INTO channel (channelId, channelHash, identKey, encryptKey, edition, name, description, allowPubPost, allowPubReply, importDate, readKeyMissing, pbePrompt) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, NOW(), ?, ?)"; + private static long insertIntoChannel(DBClient client, UI ui, long nymId, String passphrase, Enclosure enc, + EnclosureBody body, SigningPublicKey identKey, Hash ident, + long edition) throws SQLException { + PublicKey encryptKey = body.getHeaderEncryptKey(Constants.MSG_META_HEADER_ENCRYPTKEY); + if (encryptKey == null) + encryptKey = enc.getHeaderEncryptKey(Constants.MSG_META_HEADER_ENCRYPTKEY); + + String name = body.getHeaderString(Constants.MSG_META_HEADER_NAME); + if (name == null) + name = enc.getHeaderString(Constants.MSG_META_HEADER_NAME); + + String desc = body.getHeaderString(Constants.MSG_META_HEADER_DESCRIPTION); + if (desc == null) + desc = enc.getHeaderString(Constants.MSG_META_HEADER_DESCRIPTION); + + Boolean pubPosting = body.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICPOSTING); + if (pubPosting == null) + pubPosting = enc.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICPOSTING); + if (pubPosting == null) + pubPosting = Constants.DEFAULT_ALLOW_PUBLIC_POSTS; + + Boolean pubReply = body.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICREPLY); + if (pubReply == null) + pubReply = enc.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICREPLY); + if (pubReply == null) + pubReply = Constants.DEFAULT_ALLOW_PUBLIC_REPLIES; + + long channelId = client.nextId("channelIdSequence"); + if (channelId < 0) { + ui.errorMessage("Internal error with the database (GCJ/HSQLDB problem with sequences?)"); + return -1; + } + + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_INSERT_CHANNEL); + //"INSERT INTO channel (channelId, channelHash, identKey, encryptKey, edition, name, + // description, allowPubPost, allowPubReply, readKeyMissing, pbePrompt) + stmt.setLong(1, channelId); + stmt.setBytes(2, ident.getData()); + stmt.setBytes(3, identKey.getData()); + if (encryptKey != null) + stmt.setBytes(4, encryptKey.getData()); + else + stmt.setNull(4, Types.VARBINARY); + stmt.setLong(5, edition); + if (name != null) + stmt.setString(6, name); + else + stmt.setNull(6, Types.VARCHAR); + if (desc != null) + stmt.setString(7, desc); + else + stmt.setNull(7, Types.VARCHAR); + stmt.setBoolean(8, pubPosting.booleanValue()); + stmt.setBoolean(9, pubReply.booleanValue()); + + boolean readKeyMissing = false; + String pbePrompt = null; + + // the metadata was authorized, but we couldn't decrypt the body. + // that can happen if we either don't have the passphrase or if we + // don't know the appropriate channel read key. 
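// Note: these two fields record why the body is unreadable: when the enclosure carries a PBE
// prompt, the prompt text is kept in pbePrompt (presumably so the passphrase can be supplied on
// a later attempt); when there is no prompt, the channel read key itself is what is missing, so
// readKeyMissing is set instead.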
+ if (body instanceof UnreadableEnclosureBody) { + pbePrompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + if (pbePrompt == null) + readKeyMissing = true; + } + + stmt.setBoolean(10, readKeyMissing); + if (pbePrompt != null) + stmt.setString(11, pbePrompt); + else + stmt.setNull(11, Types.VARCHAR); + + int rows = stmt.executeUpdate(); + if (rows != 1) + throw new SQLException("Unable to insert the new channel"); + return channelId; + } finally { + if (stmt != null) stmt.close(); + } + } + + /* + * CREATE CACHED TABLE channel ( + * -- locally unique id + * channelId BIGINT IDENTITY PRIMARY KEY + * , channelHash VARBINARY(32) + * , identKey VARBINARY(256) + * , encryptKey VARBINARY(256) + * , edition BIGINT + * , name VARCHAR(256) + * , description VARCHAR(1024) + * -- can unauthorized people post new topics? + * , allowPubPost BOOLEAN + * -- can unauthorized people reply to existing topics? + * , allowPubReply BOOLEAN + * , UNIQUE (channelHash) + * ); + */ + private static final String SQL_GET_CHANNEL_ID = "SELECT channelId FROM channel WHERE channelHash = ?"; + private static long getChannelId(DBClient client, UI ui, Hash identHash) throws SQLException { + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_GET_CHANNEL_ID); + stmt.setBytes(1, identHash.getData()); + ResultSet rs = stmt.executeQuery(); + if (rs.next()) { + long val = rs.getLong(1); + if (!rs.wasNull()) + return val; + } + return -1; + } finally { + if (stmt != null) stmt.close(); + } + } + private static final String SQL_UPDATE_CHANNEL = "UPDATE channel SET encryptKey = ?, edition = ?, name = ?, description = ?, allowPubPost = ?, allowPubReply = ?, readKeyMissing = ?, pbePrompt = ?, importDate = NOW() WHERE channelId = ?"; + private static long updateChannel(DBClient client, UI ui, long nymId, String passphrase, Enclosure enc, + EnclosureBody body, Hash ident, long edition) throws SQLException { + long channelId = getChannelId(client, ui, ident); + if (channelId < 0) throw new SQLException("Cannot update, as there is no existing channel for " + ident.toBase64()); + + PublicKey encryptKey = body.getHeaderEncryptKey(Constants.MSG_META_HEADER_ENCRYPTKEY); + if (encryptKey == null) + encryptKey = enc.getHeaderEncryptKey(Constants.MSG_META_HEADER_ENCRYPTKEY); + + String name = body.getHeaderString(Constants.MSG_META_HEADER_NAME); + if (name == null) + name = enc.getHeaderString(Constants.MSG_META_HEADER_NAME); + + String desc = body.getHeaderString(Constants.MSG_META_HEADER_DESCRIPTION); + if (desc == null) + desc = enc.getHeaderString(Constants.MSG_META_HEADER_DESCRIPTION); + + Boolean pubPosting = body.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICPOSTING); + if (pubPosting == null) + pubPosting = enc.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICPOSTING); + if (pubPosting == null) + pubPosting = Constants.DEFAULT_ALLOW_PUBLIC_POSTS; + + Boolean pubReply = body.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICREPLY); + if (pubReply == null) + pubReply = enc.getHeaderBoolean(Constants.MSG_META_HEADER_PUBLICREPLY); + if (pubReply == null) + pubReply = Constants.DEFAULT_ALLOW_PUBLIC_REPLIES; + + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_UPDATE_CHANNEL); + //"UPDATE channel SET + // encryptKey = ?, edition = ?, name = ?, description = ?, allowPubPost = ?, + // allowPubReply = ?, readKeyMissing = ?, pbePrompt = ? 
WHERE channelId = ?"; + if (encryptKey != null) + stmt.setBytes(1, encryptKey.getData()); + else + stmt.setNull(1, Types.VARBINARY); + stmt.setLong(2, edition); + if (name != null) + stmt.setString(3, name); + else + stmt.setNull(3, Types.VARCHAR); + if (desc != null) + stmt.setString(4, desc); + else + stmt.setNull(4, Types.VARCHAR); + stmt.setBoolean(5, pubPosting.booleanValue()); + stmt.setBoolean(6, pubReply.booleanValue()); + + boolean readKeyMissing = false; + String pbePrompt = null; + + // the metadata was authorized, but we couldn't decrypt the body. + // that can happen if we either don't have the passphrase or if we + // don't know the appropriate channel read key. + if (body instanceof UnreadableEnclosureBody) { + pbePrompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + if (pbePrompt == null) + readKeyMissing = true; + } + + stmt.setBoolean(7, readKeyMissing); + if (pbePrompt != null) + stmt.setString(8, pbePrompt); + else + stmt.setNull(8, Types.VARCHAR); + + stmt.setLong(9, channelId); + + if (stmt.executeUpdate() != 1) throw new SQLException("Unable to update the channel for " + ident.toBase64()); + return channelId; + } finally { + if (stmt != null) stmt.close(); + } + } + + /* + * CREATE CACHED TABLE channelTag ( + * channelId BIGINT + * , tag VARCHAR(64) + * , wasEncrypted BOOLEAN + * , PRIMARY KEY (channelId, tag) + * ); + */ + static final String SQL_DELETE_TAGS = "DELETE FROM channelTag WHERE channelId = ?"; + private static final String SQL_INSERT_TAG = "INSERT INTO channelTag (channelId, tag, wasEncrypted) VALUES (?, ?, ?)"; + private static void setTags(DBClient client, UI ui, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_DELETE_TAGS); + //"DELETE FROM channelTag WHERE channelId = ?"; + stmt.setLong(1, channelId); + stmt.execute(); + } finally { + if (stmt != null) stmt.close(); + } + + String unencryptedTags[] = enc.getHeaderStrings(Constants.MSG_META_HEADER_TAGS); + String encryptedTags[] = body.getHeaderStrings(Constants.MSG_META_HEADER_TAGS); + try { + stmt = con.prepareStatement(SQL_INSERT_TAG); + if (unencryptedTags != null) { + for (int i = 0; i < unencryptedTags.length; i++) { + stmt.setLong(1, channelId); + stmt.setString(2, unencryptedTags[i]); + stmt.setBoolean(3, false); + stmt.executeUpdate(); // ignore rv, since the tag may already be there + } + } + if (encryptedTags != null) { + for (int i = 0; i < encryptedTags.length; i++) { + stmt.setLong(1, channelId); + stmt.setString(2, encryptedTags[i]); + stmt.setBoolean(3, true); + stmt.executeUpdate(); // ignore rv, since the tag may already be there + } + } + } finally { + if (stmt != null) stmt.close(); + } + } + + /* + * CREATE CACHED TABLE channelPostKey ( + * channelId BIGINT + * , authPubKey VARBINARY(256) + * , PRIMARY KEY (channelId, authPubKey) + * ); + */ + static final String SQL_DELETE_POSTKEYS = "DELETE FROM channelPostKey WHERE channelId = ?"; + private static final String SQL_INSERT_POSTKEY = "INSERT INTO channelPostKey (channelId, authPubKey) VALUES (?, ?)"; + private static void setPostKeys(DBClient client, UI ui, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_DELETE_POSTKEYS); + //"DELETE FROM channelPostKey WHERE channelId = ?"; + stmt.setLong(1, channelId); + stmt.execute(); + } finally { + if (stmt != null) 
stmt.close(); + } + + SigningPublicKey unencKeys[] = enc.getHeaderSigningKeys(Constants.MSG_META_HEADER_POST_KEYS); + SigningPublicKey encKeys[] = body.getHeaderSigningKeys(Constants.MSG_META_HEADER_POST_KEYS); + try { + stmt = con.prepareStatement(SQL_INSERT_POSTKEY); + if (unencKeys != null) { + for (int i = 0; i < unencKeys.length; i++) { + stmt.setLong(1, channelId); + stmt.setBytes(2, unencKeys[i].getData()); + stmt.executeUpdate(); // ignore rv, since the key may already be there + } + } + if (encKeys != null) { + for (int i = 0; i < encKeys.length; i++) { + stmt.setLong(1, channelId); + stmt.setBytes(2, encKeys[i].getData()); + stmt.executeUpdate(); // ignore rv, since the key may already be there + } + } + } finally { + if (stmt != null) stmt.close(); + } + } + + /* + * CREATE CACHED TABLE channelManageKey ( + * channelId BIGINT + * , authPubKey VARBINARY(256) + * , PRIMARY KEY (channelId, authPubKey) + * ); + */ + static final String SQL_DELETE_MANAGEKEYS = "DELETE FROM channelManageKey WHERE channelId = ?"; + private static final String SQL_INSERT_MANAGEKEY = "INSERT INTO channelManageKey (channelId, authPubKey) VALUES (?, ?)"; + private static void setManageKeys(DBClient client, UI ui, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_DELETE_MANAGEKEYS); + //"DELETE FROM channelManageKey WHERE channelId = ?"; + stmt.setLong(1, channelId); + stmt.execute(); + } finally { + if (stmt != null) stmt.close(); + } + + SigningPublicKey unencKeys[] = enc.getHeaderSigningKeys(Constants.MSG_META_HEADER_MANAGER_KEYS); + SigningPublicKey encKeys[] = body.getHeaderSigningKeys(Constants.MSG_META_HEADER_MANAGER_KEYS); + try { + stmt = con.prepareStatement(SQL_INSERT_MANAGEKEY); + if (unencKeys != null) { + for (int i = 0; i < unencKeys.length; i++) { + stmt.setLong(1, channelId); + stmt.setBytes(2, unencKeys[i].getData()); + stmt.executeUpdate(); // ignore rv, since the key may already be there + } + } + if (encKeys != null) { + for (int i = 0; i < encKeys.length; i++) { + stmt.setLong(1, channelId); + stmt.setBytes(2, encKeys[i].getData()); + stmt.executeUpdate(); // ignore rv, since the key may already be there + } + } + } finally { + if (stmt != null) stmt.close(); + } + } + + /* + * CREATE CACHED TABLE channelArchive ( + * channelId BIGINT + * , archiveId BIGINT + * , wasEncrypted BOOLEAN + * , PRIMARY KEY (channelId, archiveId) + * ); + * + * CREATE CACHED TABLE archive ( + * archiveId BIGINT PRIMARY KEY + * -- are we allowed to post (with the auth we have)? + * , postAllowed BOOLEAN + * -- are we allowed to pull messages (with the auth we have)? 
+ * , readAllowed BOOLEAN + * -- index into uris.uriId to access the archive + * , uriId BIGINT + * ); + */ + static final String SQL_DELETE_ARCHIVE_URIS = "DELETE FROM uriAttribute WHERE uriId IN (SELECT uriId FROM archive WHERE archiveId IN (SELECT archiveId FROM channelArchive WHERE channelId = ?))"; + static final String SQL_DELETE_ARCHIVES = "DELETE FROM archive WHERE archiveId IN (SELECT archiveId FROM channelArchive WHERE channelId = ?)"; + static final String SQL_DELETE_CHAN_ARCHIVES = "DELETE FROM channelArchive WHERE channelId = ?"; + private static final String SQL_INSERT_ARCHIVE = "INSERT INTO archive (archiveId, postAllowed, readAllowed, uriId) VALUES (?, ?, ?, ?)"; + private static final String SQL_INSERT_CHAN_ARCHIVE = "INSERT INTO channelArchive (channelId, archiveId, wasEncrypted) VALUES (?, ?, ?)"; + + private static void setChannelArchives(DBClient client, UI ui, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + client.exec(SQL_DELETE_ARCHIVE_URIS, channelId); + client.exec(SQL_DELETE_ARCHIVES, channelId); + client.exec(SQL_DELETE_CHAN_ARCHIVES, channelId); + + addArchives(client, channelId, body.getHeaderURIs(Constants.MSG_META_HEADER_ARCHIVES), true); + addArchives(client, channelId, enc.getHeaderURIs(Constants.MSG_META_HEADER_ARCHIVES), false); + } + private static void addArchives(DBClient client, long channelId, SyndieURI archiveURIs[], boolean encrypted) throws SQLException { + if (archiveURIs == null) return; + Connection con = client.con(); + PreparedStatement archStmt = null; + PreparedStatement chanStmt = null; + try { + archStmt = con.prepareStatement(SQL_INSERT_ARCHIVE); + chanStmt = con.prepareStatement(SQL_INSERT_CHAN_ARCHIVE); + for (int i = 0; i < archiveURIs.length; i++) { + long uriId = client.addURI(archiveURIs[i]); + //"INSERT INTO archive (archiveId, postAllowed, readAllowed, uriId) VALUES (?, ?, ?, ?)"; + long archiveId = client.nextId("archiveIdSequence"); + archStmt.setLong(1, archiveId); + archStmt.setBoolean(2, false); + archStmt.setBoolean(3, true); + archStmt.setLong(4, uriId); + if (archStmt.executeUpdate() != 1) + throw new SQLException("Unable to insert the archive for uri " + uriId + "/" + channelId); + + //"INSERT INTO channelArchive (channelId, archiveId, wasEncrypted) VALUES (?, ?, ?)"; + chanStmt.setLong(1, channelId); + chanStmt.setLong(2, archiveId); + chanStmt.setBoolean(3, encrypted); + if (chanStmt.executeUpdate() != 1) + throw new SQLException("Unable to insert the channelArchive for uri " + uriId + "/" + channelId); + } + } finally { + if (archStmt != null) archStmt.close(); + if (chanStmt != null) chanStmt.close(); + } + } + + static final String SQL_DELETE_READ_KEYS = "DELETE FROM channelReadKey WHERE channelId = ?"; + private static void setChannelReadKeys(DBClient client, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + client.exec(SQL_DELETE_READ_KEYS, channelId); + addChannelReadKeys(client, channelId, body.getHeaderSessionKeys(Constants.MSG_META_HEADER_READKEYS)); + addChannelReadKeys(client, channelId, enc.getHeaderSessionKeys(Constants.MSG_META_HEADER_READKEYS)); + } + /* + * CREATE CACHED TABLE channelReadKey ( + * channelId BIGINT + * , keyStart DATE DEFAULT NULL + * , keyEnd DATE DEFAULT NULL + * , keyData VARBINARY(32) + * ); + */ + private static final String SQL_INSERT_CHANNEL_READ_KEY = "INSERT INTO channelReadKey (channelId, keyData) VALUES (?, ?)"; + private static void addChannelReadKeys(DBClient client, long channelId, SessionKey keys[]) throws SQLException { 
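// Note: the caller (setChannelReadKeys) first deletes the channel's existing read keys and then
// re-inserts the session keys found in the encrypted body headers, followed by any published in
// the public headers; keyStart/keyEnd keep their NULL defaults because the INSERT only supplies
// channelId and keyData.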
+ if (keys == null) return; + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_INSERT_CHANNEL_READ_KEY); + for (int i = 0; i < keys.length; i++) { + stmt.setLong(1, channelId); + stmt.setBytes(2, keys[i].getData()); + if (stmt.executeUpdate() != 1) + throw new SQLException("Unable to insert the channel read key"); + } + } finally { + if (stmt != null) stmt.close(); + } + } + /* + * CREATE CACHED TABLE channelMetaHeader ( + * channelId BIGINT + * , headerName VARCHAR(256) + * , headerValue VARCHAR(4096) + * , wasEncrypted BOOLEAN + * ); + */ + + static final String SQL_DELETE_CHANNEL_META_HEADER = "DELETE FROM channelMetaHeader WHERE channelId = ?"; + private static void setChannelMetaHeaders(DBClient client, long channelId, Enclosure enc, EnclosureBody body) throws SQLException { + client.exec(SQL_DELETE_CHANNEL_META_HEADER, channelId); + addChannelMetaHeaders(client, channelId, body.getHeaders(), true); + addChannelMetaHeaders(client, channelId, enc.getHeaders(), false); + } + private static final String SQL_INSERT_CHANNEL_META_HEADER = "INSERT INTO channelMetaHeader (channelId, headerName, headerValue, wasEncrypted) VALUES (?, ?, ?, ?)"; + private static void addChannelMetaHeaders(DBClient client, long channelId, Properties headers, boolean encrypted) throws SQLException { + if (headers == null) return; + Connection con = client.con(); + PreparedStatement stmt = null; + try { + stmt = con.prepareStatement(SQL_INSERT_CHANNEL_META_HEADER); + for (Iterator iter = headers.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = headers.getProperty(name); + //"INSERT INTO channelMetaHeader (channelId, headerName, headerValues, wasEncrypted) VALUES (?, ?, ?, ?)"; + stmt.setLong(1, channelId); + stmt.setString(2, name); + stmt.setString(3, val); + stmt.setBoolean(4, encrypted); + if (stmt.executeUpdate() != 1) + throw new SQLException("Unable to insert the channel meta header"); + } + } finally { + if (stmt != null) stmt.close(); + } + } + + static final String SQL_DELETE_CHANNEL_REF_URIS = "DELETE FROM uriAttribute WHERE uriId IN (SELECT uriId FROM channelReferenceGroup WHERE channelId = ?)"; + static final String SQL_DELETE_CHANNEL_REFERENCES = "DELETE FROM channelReferenceGroup WHERE channelId = ?"; + private static void setChannelReferences(DBClient client, long channelId, EnclosureBody body) throws SQLException { + client.exec(SQL_DELETE_CHANNEL_REF_URIS, channelId); + client.exec(SQL_DELETE_CHANNEL_REFERENCES, channelId); + RefWalker walker = new RefWalker(client, channelId); + // + for (int i = 0; i < body.getReferenceRootCount(); i++) { + ReferenceNode node = body.getReferenceRoot(i); + walker.visitRoot(node, i); + } + walker.done(); + } + + /* + * CREATE CACHED TABLE channelReferenceGroup ( + * channelId BIGINT + * , groupId INTEGER NOT NULL + * , parentGroupId INTEGER + * , siblingOrder INTEGER NOT NULL + * , name VARCHAR(256) + * , description VARCHAR(1024) + * , uriId BIGINT + * -- allows for references of 'ban', 'recommend', 'trust', etc + * , referenceType INTEGER DEFAULT NULL + * , wasEncrypted BOOLEAN + * , PRIMARY KEY (channelId, groupId) + * ); + */ + private static final String SQL_INSERT_CHANNEL_REFERENCE = "INSERT INTO channelReferenceGroup (channelId, groupId, parentGroupId, siblingOrder, name, description, uriId, referenceType, wasEncrypted) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"; + private static class RefWalker { + private DBClient _client; + private long _channelId; + private long 
_nextId; + private PreparedStatement _stmt; + public RefWalker(DBClient client, long channelId) throws SQLException { + _client = client; + _channelId = channelId; + _nextId = 0; + _stmt = _client.con().prepareStatement(SQL_INSERT_CHANNEL_REFERENCE); + } + public void done() throws SQLException { _stmt.close(); } + public void visitRoot(ReferenceNode node, int branch) throws SQLException { visit(node, branch, null); } + private void visit(ReferenceNode node, int branch, Long parent) throws SQLException { + insertRef(node, _nextId, parent, branch); + Long cur = new Long(_nextId); + _nextId++; + for (int i = 0; i < node.getChildCount(); i++) + visit(node.getChild(i), i, cur); + } + //"INSERT INTO channelReferenceGroup + // (channelId, groupId, parentGroupId, siblingOrder, name, + // description, uriId, referenceType, wasEncrypted)"; + private void insertRef(ReferenceNode node, long groupId, Long parent, long branch) throws SQLException { + SyndieURI uri = node.getURI(); + long uriId = -1; + if (uri != null) + uriId = _client.addURI(uri); + _stmt.setLong(1, _channelId); + _stmt.setLong(2, groupId); + if (parent != null) + _stmt.setLong(3, parent.longValue()); + else + _stmt.setNull(3, Types.BIGINT); + _stmt.setLong(4, branch); + if (node.getName() != null) + _stmt.setString(5, node.getName()); + else + _stmt.setNull(5, Types.VARCHAR); + if (node.getDescription() != null) + _stmt.setString(6, node.getDescription()); + else + _stmt.setNull(6, Types.VARCHAR); + if (uriId != -1) + _stmt.setLong(7, uriId); + else + _stmt.setNull(7, Types.BIGINT); + if (node.getReferenceType() != null) + _stmt.setString(8, node.getReferenceType()); + else + _stmt.setNull(8, Types.VARCHAR); + _stmt.setBoolean(9, true); + if (_stmt.executeUpdate() != 1) + throw new SQLException("Adding a channel reference did not go through"); + } + } + + private static void saveToArchive(DBClient client, UI ui, Hash ident, Enclosure enc) { + File outDir = new File(client.getArchiveDir(), ident.toBase64()); + outDir.mkdirs(); + File outMeta = new File(outDir, "meta" + Constants.FILENAME_SUFFIX); + try { + enc.store(outMeta.getPath()); + ui.debugMessage("Metadata saved to the archive at " + outMeta.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error saving the metadata to the archive", ioe); + } + } +} diff --git a/src/syndie/db/ImportPost.java b/src/syndie/db/ImportPost.java new file mode 100644 index 0000000..1d028be --- /dev/null +++ b/src/syndie/db/ImportPost.java @@ -0,0 +1,875 @@ +package syndie.db; + +import java.io.File; +import java.io.IOException; +import java.util.*; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.Types; +import net.i2p.crypto.KeyGenerator; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.ChannelInfo; +import syndie.data.Enclosure; +import syndie.data.EnclosureBody; +import syndie.data.MessageInfo; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + * + */ +public class ImportPost { + private DBClient _client; + private UI _ui; + private long _nymId; + private String _pass; + private Enclosure _enc; + private EnclosureBody _body; + private SyndieURI _uri; + private Hash _channel; + private long _channelId; + private boolean _publishedBodyKey; + private boolean _privateMessage; + private boolean _authenticated; + private boolean _authorized; + private String _bodyPassphrase; + + private ImportPost(DBClient client, UI ui, Enclosure enc, long nymId, String pass, String bodyPassphrase) { + _client = client; + _ui = 
ui; + _enc = enc; + _nymId = nymId; + _pass = pass; + _privateMessage = false; + _bodyPassphrase = bodyPassphrase; + } + + /* + * The post message is ok if it is either signed by the channel's + * identity itself, one of the manager keys, one of the authorized keys, + * or the post's authentication key. the exit code in ui.commandComplete is + * -1 if unimportable, 0 if imported fully, or 1 if imported but not decryptable + */ + public static boolean process(DBClient client, UI ui, Enclosure enc, long nymId, String pass, String bodyPassphrase) { + ImportPost imp = new ImportPost(client, ui, enc, nymId, pass, bodyPassphrase); + return imp.process(); + } + private boolean process() { + _uri = _enc.getHeaderURI(Constants.MSG_HEADER_POST_URI); + if (_uri == null) { + _ui.errorMessage("No URI in the post"); + _ui.commandComplete(-1, null); + return false; + } + _channel = _uri.getScope(); + if (_channel == null) { + _ui.errorMessage("No channel in the URI: " + _uri); + _ui.commandComplete(-1, null); + return false; + } + + // first we check to ban posts by ANY author in a banned channel + if (_client.getBannedChannels().contains(_channel)) { + _ui.errorMessage("Not importing banned post in " + _channel.toBase64() + ": " + _uri); + _ui.commandComplete(-1, null); + return false; + } + /** was a published bodyKey used, rather than a secret readKey or replyKey? */ + _publishedBodyKey = false; + _body = null; + if (_enc.isReply()) { + List privKeys = _client.getReplyKeys(_channel, _nymId, _pass); + byte target[] = _enc.getHeaderBytes(Constants.MSG_HEADER_TARGET_CHANNEL); + if (target != null) + privKeys.addAll(_client.getReplyKeys(new Hash(target), _nymId, _pass)); + if ( (privKeys != null) && (privKeys.size() > 0) ) { + for (int i = 0; i < privKeys.size(); i++) { + PrivateKey priv = (PrivateKey)privKeys.get(i); + _ui.debugMessage("Attempting decrypt with key " + KeyGenerator.getPublicKey(priv).calculateHash().toBase64()); + try { + _body = new EnclosureBody(_client.ctx(), _enc.getData(), _enc.getDataSize(), priv); + _privateMessage = true; + _ui.debugMessage("Private decryption successful with key " + i); + break; + } catch (IOException ioe) { + // ignore + _ui.debugMessage("IO error attempting decryption " + i, ioe); + } catch (DataFormatException dfe) { + // ignore + _ui.debugMessage("DFE attempting decryption " + i, dfe); + } + } + if (_body == null) + _ui.errorMessage("None of the reply keys we have work for the message (we have " + privKeys.size() + " keys)"); + } + + if (_body == null) { + String prompt = _enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + byte promptSalt[] = _enc.getHeaderBytes(Constants.MSG_HEADER_PBE_PROMPT_SALT); + if ( (prompt != null) && (promptSalt != null) && (promptSalt.length != 0) ) { + String passphrase = _bodyPassphrase; //args.getOptValue("passphrase"); + if (passphrase == null) { + _ui.errorMessage("Passphrase required to extract this message"); + _ui.errorMessage("Please use --passphrase 'passphrase value', where the passphrase value is the answer to:"); + _ui.errorMessage(CommandImpl.strip(prompt)); + _body = new UnreadableEnclosureBody(_client.ctx()); + } else { + SessionKey key = _client.ctx().keyGenerator().generateSessionKey(promptSalt, DataHelper.getUTF8(passphrase)); + try { + // decrypt it with that key + _body = new EnclosureBody(_client.ctx(), _enc.getData(), _enc.getDataSize(), key); + } catch (DataFormatException dfe) { + _ui.errorMessage("Invalid passphrase"); + _ui.debugMessage("Invalid passphrase cause", dfe); + _body = new 
UnreadableEnclosureBody(_client.ctx()); + } catch (IOException ioe) { + _ui.errorMessage("Invalid passphrase"); + _ui.debugMessage("Invalid passphrase cause", ioe); + _body = new UnreadableEnclosureBody(_client.ctx()); + } + } + } + } + + if (_body == null) { + _ui.errorMessage("Cannot import a reply that we do not have the private key to read"); + _body = new UnreadableEnclosureBody(_client.ctx()); + } + } else if (_enc.isPost()) { + // it can either be encrypted with a key in the public header or encrypted + // with one of the channel's read keys... + + SessionKey key = _enc.getHeaderSessionKey(Constants.MSG_HEADER_BODYKEY); + if (key != null) { + try { + // decrypt it with that key + _body = new EnclosureBody(_client.ctx(), _enc.getData(), _enc.getDataSize(), key); + _publishedBodyKey = true; + _ui.debugMessage("Published bodyKey was valid"); + } catch (DataFormatException dfe) { + _ui.errorMessage("Provided bodyKey is invalid", dfe); + _ui.commandComplete(-1, null); + return false; + } catch (IOException ioe) { + _ui.errorMessage("Provided bodyKey is invalid", ioe); + _ui.commandComplete(-1, null); + return false; + } + } else { + String prompt = _enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + byte promptSalt[] = _enc.getHeaderBytes(Constants.MSG_HEADER_PBE_PROMPT_SALT); + if ( (prompt != null) && (promptSalt != null) && (promptSalt.length != 0) ) { + String passphrase = _bodyPassphrase; //args.getOptValue("passphrase"); + if (passphrase == null) { + _ui.errorMessage("Passphrase required to extract this message"); + _ui.errorMessage("Please use --passphrase 'passphrase value', where the passphrase value is the answer to:"); + _ui.errorMessage(CommandImpl.strip(prompt)); + _body = new UnreadableEnclosureBody(_client.ctx()); + } else { + key = _client.ctx().keyGenerator().generateSessionKey(promptSalt, DataHelper.getUTF8(passphrase)); + try { + // decrypt it with that key + _body = new EnclosureBody(_client.ctx(), _enc.getData(), _enc.getDataSize(), key); + } catch (DataFormatException dfe) { + _ui.errorMessage("Invalid passphrase [" + passphrase + "] salt [" + Base64.encode(promptSalt) + "]", dfe); + _body = new UnreadableEnclosureBody(_client.ctx()); + } catch (IOException ioe) { + _ui.errorMessage("Invalid passphrase [" + passphrase + "] salt [" + Base64.encode(promptSalt) + "]", ioe); + _body = new UnreadableEnclosureBody(_client.ctx()); + } + } + } else { + List keys = _client.getReadKeys(_channel, _nymId, _pass); + if ( (keys == null) || (keys.size() <= 0) ) { + _ui.errorMessage("No read keys known for " + _channel.toBase64()); + _body = new UnreadableEnclosureBody(_client.ctx()); + } + byte target[] = _enc.getHeaderBytes(Constants.MSG_HEADER_TARGET_CHANNEL); + if ( (target != null) && (target.length == Hash.HASH_LENGTH) ) { + List targetKeys = _client.getReadKeys(new Hash(target), _nymId, _pass); + keys.addAll(targetKeys); + } + for (int i = 0; i < keys.size(); i++) { + // try decrypting with that key + try { + _body = new EnclosureBody(_client.ctx(), _enc.getData(), _enc.getDataSize(), (SessionKey)keys.get(i)); + _ui.debugMessage("Known readKey was valid"); + break; + } catch (IOException ioe) { + _ui.debugMessage("Read key attempt failed, continuing...", ioe); + continue; + } catch (DataFormatException dfe) { + //dfe.printStackTrace(); + _ui.debugMessage("Read key attempt failed, continuing...", dfe); + continue; + } + } + if (_body == null) { + _ui.errorMessage("Read keys were unable to decrypt the post to " + _channel.toBase64()); + _body = new 
UnreadableEnclosureBody(_client.ctx()); + } + } + } + } else { + _ui.errorMessage("Not a post or a reply... wtf? " + _enc.getEnclosureType()); + _ui.commandComplete(-1, null); + return false; + } + + // now the body has been decrypted... + _channelId = _client.getChannelId(_channel); + if (_channelId == -1) { + _ui.errorMessage("Channel is not known: " + _channel.toBase64()); + _ui.commandComplete(-1, null); + return false; + } else { + _ui.debugMessage("Target channel is known: " + _channelId + "/" + _channel.toBase64()); + } + + _ui.debugMessage("private headers read: " + _body.getHeaders().toString()); + _ui.debugMessage("public headers read: " + _enc.getHeaders().toString()); + + // check authentication/authorization + _authenticated = false; + _authorized = false; + + // posts do not need to include an identity in their headers (though if they are + // neither identified nor authenticated, they'll be dropped) + Signature authenticationSig = _enc.getAuthenticationSig(); + byte authorVal[] = _body.getHeaderBytes(Constants.MSG_HEADER_AUTHOR); + if (authorVal == null) { // not a hidden author, maybe a publicly visible author? + authorVal = _enc.getHeaderBytes(Constants.MSG_HEADER_AUTHOR); + _ui.debugMessage("Not permuting the authentication signature (public)"); + } else { // hidden author, check to see if we need to permute authenticationSig + byte mask[] = _body.getHeaderBytes(Constants.MSG_HEADER_AUTHENTICATION_MASK); + if ( (mask != null) && (mask.length == Signature.SIGNATURE_BYTES) ) { + _ui.debugMessage("Permuting the authentication signature"); + byte realSig[] = DataHelper.xor(authenticationSig.getData(), mask); + authenticationSig.setData(realSig); + } else { + _ui.debugMessage("Not permuting the authentication signature"); + } + } + if ( (authorVal != null) && (authorVal.length == Hash.HASH_LENGTH) ) { + Hash authorHash = new Hash(authorVal); + SigningPublicKey pub = _client.getIdentKey(authorHash); + if (pub != null) { + _authenticated = _client.ctx().dsa().verifySignature(authenticationSig, _enc.getAuthenticationHash(), pub); + if (_authenticated) { + // now filter out banned authors who are posting in channels that + // aren't banned + if (_client.getBannedChannels().contains(authorHash)) { + _ui.errorMessage("Not importing post written by banned author " + authorHash.toBase64() + ": " + _uri); + _ui.commandComplete(-1, null); + return false; + } + } + } + } + + // includes managers, posters, and the owner + List signingPubKeys = _client.getAuthorizedPosters(_channel); + if (signingPubKeys == null) { + _ui.errorMessage("Internal error getting authorized posters for the channel"); + _ui.commandComplete(-1, null); + return false; + } + + Signature authorizationSig = _enc.getAuthorizationSig(); + Hash authorizationHash = _enc.getAuthorizationHash(); + for (int i = 0; i < signingPubKeys.size(); i++) { + SigningPublicKey pubKey = (SigningPublicKey)signingPubKeys.get(i); + boolean ok = _client.ctx().dsa().verifySignature(authorizationSig, authorizationHash, pubKey); + if (ok) { + _authorized = true; + break; + } + } + + if (_authenticated || _authorized) { + boolean ok = importMessage(); + if (ok) { + if (_body instanceof UnreadableEnclosureBody) + _ui.commandComplete(1, null); + else + _ui.commandComplete(0, null); + } else { + _ui.commandComplete(-1, null); + } + return ok; + } else { + _ui.errorMessage("Neither authenticated nor authorized. 
bugger off."); + _ui.commandComplete(-1, null); + return false; + } + } + + private boolean importMessage() { + _ui.debugMessage("Message is" + (_authenticated ? " authenticated" : " not authenticated") + + (_authorized ? " authorized" : " not authorized") + ": " + _body); + long msgId = _client.nextId("msgIdSequence"); + if (msgId < 0) { + _ui.errorMessage("Internal error with the database (GCJ/HSQLDB problem with sequences?)"); + return false; + } + _ui.debugMessage("importing new message with id " + msgId); + + try { + boolean added = insertToChannel(msgId); + if (!added) { + _ui.statusMessage("Already imported"); + return false; + } + setMessageHierarchy(msgId); + setMessageTags(msgId); + setMessageAttachments(msgId); + setMessagePages(msgId); + setMessageReferences(msgId); + + processControlActivity(); + + saveToArchive(_client, _ui, _channel, _enc); + return true; + } catch (SQLException se) { + _ui.errorMessage("Error importing the message", se); + return false; + } + } + + /** + * Cancel messages, overwrite messages, import channel keys, etc + */ + private void processControlActivity() throws SQLException { + // + } + + private static boolean isAuth(Set authorizedKeys, Hash ident) { + for (Iterator iter = authorizedKeys.iterator(); iter.hasNext(); ) { + SigningPublicKey key = (SigningPublicKey)iter.next(); + if (key.calculateHash().equals(ident)) + return true; + } + return false; + } + + private static final String SQL_INSERT_CHANNEL = "INSERT INTO channelMessage (" + + "msgId, authorChannelId, messageId, targetChannelId, subject, overwriteScopeHash, " + + "overwriteMessageId, forceNewThread, refuseReplies, wasEncrypted, wasPrivate, wasAuthorized, " + + "wasAuthenticated, isCancelled, expiration, importDate, scopeChannelId, wasPBE, " + + "readKeyMissing, replyKeyMissing, pbePrompt" + + ") VALUES (" + + "?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, NOW(), ?, ?, ?, ?, ?" + + ")"; + /** + * returns true if the message was inserted into the channel properly, false + * if the message was already in there or there was a problem + */ + private boolean insertToChannel(long msgId) throws SQLException { + Hash author = null; + if (_authenticated) { + byte authorVal[] = _body.getHeaderBytes(Constants.MSG_HEADER_AUTHOR); + if (authorVal == null) // not a hidden author, maybe a publicly visible author? 
+ authorVal = _enc.getHeaderBytes(Constants.MSG_HEADER_AUTHOR); + if (authorVal == null) // we are authenticated, but implicitly, which means the channel's key was used + author = _channel; + else + author = new Hash(authorVal); + } + + Long messageId = _uri.getMessageId(); + + long scopeChannelId = _client.getChannelId(_channel); + long targetChannelId = scopeChannelId; + byte target[] = _body.getHeaderBytes(Constants.MSG_HEADER_TARGET_CHANNEL); + if (target != null) { + Hash targetHash = new Hash(target); + long targetId = _client.getChannelId(targetHash); + if (isAuthorizedFor(targetHash, targetId, author)) { + targetChannelId = targetId; + _authorized = true; + } + } else { + if (isAuthorizedFor(_channel, targetChannelId, author)) { + _authorized = true; + } + } + + String subject = _body.getHeaderString(Constants.MSG_HEADER_SUBJECT); + if (subject == null) + subject = _enc.getHeaderString(Constants.MSG_HEADER_SUBJECT); + + SyndieURI overwrite = _body.getHeaderURI(Constants.MSG_HEADER_OVERWRITE); + Hash overwriteHash = null; + Long overwriteMsg = null; + if (overwrite != null) { + overwriteHash = overwrite.getScope(); + overwriteMsg = overwrite.getMessageId(); + } + + Boolean forceNewThread = _body.getHeaderBoolean(Constants.MSG_HEADER_FORCE_NEW_THREAD); + if (forceNewThread == null) + forceNewThread = _enc.getHeaderBoolean(Constants.MSG_HEADER_FORCE_NEW_THREAD); + + Boolean refuseReplies = _body.getHeaderBoolean(Constants.MSG_HEADER_REFUSE_REPLIES); + if (refuseReplies == null) + refuseReplies = _enc.getHeaderBoolean(Constants.MSG_HEADER_REFUSE_REPLIES); + + boolean wasEncrypted = !_publishedBodyKey; + boolean wasPrivate = _privateMessage; + boolean wasPBE = _enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT) != null; + + Date expiration = _body.getHeaderDate(Constants.MSG_HEADER_EXPIRATION); + if (expiration == null) + expiration = _enc.getHeaderDate(Constants.MSG_HEADER_EXPIRATION); + + long channelId = _client.getChannelId(_channel); + if (channelId < 0) { + _ui.errorMessage("Cannot import the post, as it was made in a channel we don't know"); + return false; + } + MessageInfo msg = _client.getMessage(channelId, _uri.getMessageId()); + if (msg != null) { + _ui.debugMessage("Existing message: " + msg.getInternalId()); + if ( (msg.getPassphrasePrompt() == null) && (!msg.getReadKeyUnknown()) && (!msg.getReplyKeyUnknown()) ) { + return false; + } else { + // we have the post, but don't have the passphrase or keys. So... + // delete it, then import it again clean + _ui.debugMessage("Known message was not decrypted, so lets drop it and try again..."); + _client.deleteFromDB(_uri, _ui); + msg = null; + } + } + _ui.debugMessage("No matching messages, continuing with insert.. (" + _uri.toString() + ")"); //author != null ? 
author.toBase64() : "no author") + ", for " + _uri + ", msgId=" + msgId + ")"); + + if (scopeChannelId < 0) { + _ui.errorMessage("The message's scope is not known"); + return false; + } + + long authorChannelId = -1; + if (author != null) + authorChannelId = _client.getChannelId(author); + + PreparedStatement stmt = null; + try { + stmt = _client.con().prepareStatement(SQL_INSERT_CHANNEL); + //"msgId, authorChannelId, messageId, targetChannelId, subject, overwriteScopeHash, " + + //"overwriteMessageId, forceNewThread, refuseReplies, wasEncrypted, wasPrivate, wasAuthorized, " + + //"wasAuthenticated, isCancelled, expiration, importDate, scopeChannelId, " + + //"readKeyMissing, replyKeyMissing, pbePrompt" + stmt.setLong(1, msgId); + + if (authorChannelId >= 0) + stmt.setLong(2, authorChannelId); + else + stmt.setNull(2, Types.BIGINT); + + if (messageId != null) + stmt.setLong(3, messageId.longValue()); + else + stmt.setNull(3, Types.BIGINT); + + stmt.setLong(4, targetChannelId); + + if (subject != null) + stmt.setString(5, subject); + else + stmt.setNull(5, Types.VARCHAR); + + if (overwriteHash != null) + stmt.setBytes(6, overwriteHash.getData()); + else + stmt.setNull(6, Types.VARBINARY); + + if (overwriteMsg != null) + stmt.setLong(7, overwriteMsg.longValue()); + else + stmt.setNull(7, Types.BIGINT); + + if (forceNewThread != null) + stmt.setBoolean(8, forceNewThread.booleanValue()); + else + stmt.setNull(8, Types.BOOLEAN); + + if (refuseReplies != null) + stmt.setBoolean(9, refuseReplies.booleanValue()); + else + stmt.setNull(9, Types.BOOLEAN); + + stmt.setBoolean(10, wasEncrypted); + stmt.setBoolean(11, wasPrivate); + stmt.setBoolean(12, _authorized); + stmt.setBoolean(13, _authenticated); + stmt.setBoolean(14, false); // cancelled + if (expiration != null) + stmt.setDate(15, new java.sql.Date(expiration.getTime())); + else + stmt.setNull(15, Types.DATE); + stmt.setLong(16, scopeChannelId); + stmt.setBoolean(17, wasPBE); + + //"readKeyMissing, replyKeyMissing, pbePrompt" + boolean readKeyMissing = false; + boolean replyKeyMissing = false; + String pbePrompt = null; + + // the metadata was authorized, but we couldn't decrypt the body. + // that can happen if we either don't have the passphrase or if we + // don't know the appropriate channel read key. 
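// Note: mirrors the bookkeeping in ImportMeta: an undecryptable body either stores its PBE
// prompt or, when no prompt is present, flags the missing key, using replyKeyMissing when
// wasPrivate is set and readKeyMissing otherwise.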
+ if (_body instanceof UnreadableEnclosureBody) { + pbePrompt = _enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + if (pbePrompt == null) { + if (wasPrivate) + replyKeyMissing = true; + else + readKeyMissing = true; + } + } + + stmt.setBoolean(18, readKeyMissing); + stmt.setBoolean(19, replyKeyMissing); + if (pbePrompt != null) + stmt.setString(20, pbePrompt); + else + stmt.setNull(20, Types.VARCHAR); + + + int rows = stmt.executeUpdate(); + if (rows != 1) { + _ui.debugMessage("Post NOT imported (" + rows + ")"); + _ui.errorMessage("Error importing the post"); + return false; + } else { + _ui.debugMessage("Post imported..."); + return true; + } + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + /** + * the message may not be directly authorized for the given scope, but the + * target channel may either allow unauthorized posts (thereby authorizing it) + * or may allow unauthorized replies (and if we are replying to an authorized + * post, we are thereby authorized) + */ + private boolean isAuthorizedFor(Hash targetHash, long targetId, Hash author) { + if (targetId >= 0) { + ChannelInfo chanInfo = _client.getChannel(targetId); + if (chanInfo != null) { + if ( (author != null) && + (isAuth(chanInfo.getAuthorizedManagers(), author) || + isAuth(chanInfo.getAuthorizedPosters(), author) || + chanInfo.getIdentKey().calculateHash().equals(author)) ) { + // explicitly allowed to post to this channel + _ui.debugMessage("Message is explicitly authorized"); + return true; + } else if (chanInfo.getAllowPublicPosts()) { + // implicitly allowed to start new threads + _ui.debugMessage("Message is an unauthorized post to a chan that doesnt require auth, so allow it"); + return true; + } else if (chanInfo.getAllowPublicReplies()) { + SyndieURI parents[] = _body.getHeaderURIs(Constants.MSG_HEADER_REFERENCES); + if ( (parents != null) && (parents.length > 0) ) { + for (int i = 0; i < parents.length; i++) { + Hash scope = parents[i].getScope(); + if ( (scope != null) && (scope.equals(targetHash)) ) { + MessageInfo parentMsg = _client.getMessage(targetId, parents[i].getMessageId()); + if ( (parentMsg != null) && (parentMsg.getWasAuthorized()) ) { + // post is a reply to a message in the channel + _ui.debugMessage("Message is an unauthorized reply to an authorized post, so allow it"); + return true; + } + } + } + } // no parents, and !allowPublicPosts + } + } + } + return false; + } + + static final String SQL_DELETE_MESSAGE_HIERARCHY = "DELETE FROM messageHierarchy WHERE msgId = ?"; + private static final String SQL_INSERT_MESSAGE_PARENT = "INSERT INTO messageHierarchy (msgId, referencedChannelHash, referencedMessageId, referencedCloseness) VALUES (?, ?, ?, ?)"; + private void setMessageHierarchy(long msgId) throws SQLException { + SyndieURI refs[] = _body.getHeaderURIs(Constants.MSG_HEADER_REFERENCES); + if (refs == null) + refs = _enc.getHeaderURIs(Constants.MSG_HEADER_REFERENCES); + _client.exec(SQL_DELETE_MESSAGE_HIERARCHY, msgId); + if ( (refs != null) && (refs.length > 0) ) { + PreparedStatement stmt = null; + try { + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_PARENT); + int closeness = 1; + for (int i = 0; i < refs.length; i++) { + Hash chan = refs[i].getScope(); + Long msg = refs[i].getMessageId(); + if ( (chan != null) && (msg != null) ) { + //(msgId, referencedChannelHash, referencedMessageId, referencedCloseness) + stmt.setLong(1, msgId); + stmt.setBytes(2, chan.getData()); + stmt.setLong(3, msg.longValue()); + stmt.setInt(4, closeness); 
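// closeness starts at 1 and only increments for references that include both a scope and a
// messageId, so (illustratively) a REFERENCES header listing uriA then uriB stores uriA at
// closeness 1 and uriB at closeness 2, while malformed entries are skipped without consuming
// a slot.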
+ stmt.executeUpdate(); + closeness++; + } + } + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + } + + static final String SQL_DELETE_MESSAGE_TAGS = "DELETE FROM messageTag WHERE msgId = ?"; + private static final String SQL_INSERT_MESSAGE_TAG = "INSERT INTO messageTag (msgId, tag, isPublic) VALUES (?, ?, ?)"; + private void setMessageTags(long msgId) throws SQLException { + String privTags[] = _body.getHeaderStrings(Constants.MSG_HEADER_TAGS); + String pubTags [] = _enc.getHeaderStrings(Constants.MSG_HEADER_TAGS); + _client.exec(SQL_DELETE_MESSAGE_TAGS, msgId); + if ( ( (privTags != null) && (privTags.length > 0) ) || + ( (pubTags != null) && (pubTags.length > 0) ) ) { + PreparedStatement stmt = null; + try { + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_TAG); + insertTags(stmt, msgId, privTags, false); + insertTags(stmt, msgId, pubTags, true); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + } + private void insertTags(PreparedStatement stmt, long msgId, String tags[], boolean isPublic) throws SQLException { + if (tags != null) { + for (int i = 0; i < tags.length; i++) { + stmt.setLong(1, msgId); + stmt.setString(2, CommandImpl.strip(tags[i])); + stmt.setBoolean(3, isPublic); + stmt.executeUpdate(); + } + } + } + + static final String SQL_DELETE_MESSAGE_ATTACHMENTS = "DELETE FROM messageAttachment WHERE msgId = ?"; + static final String SQL_DELETE_MESSAGE_ATTACHMENT_DATA = "DELETE FROM messageAttachmentData WHERE msgId = ?"; + static final String SQL_DELETE_MESSAGE_ATTACHMENT_CONFIG = "DELETE FROM messageAttachmentConfig WHERE msgId = ?"; + private void setMessageAttachments(long msgId) throws SQLException { + _client.exec(SQL_DELETE_MESSAGE_ATTACHMENTS, msgId); + _client.exec(SQL_DELETE_MESSAGE_ATTACHMENT_DATA, msgId); + _client.exec(SQL_DELETE_MESSAGE_ATTACHMENT_CONFIG, msgId); + for (int i = 0; i < _body.getAttachments(); i++) + insertAttachment(msgId, i); + } + private static final String SQL_INSERT_MESSAGE_ATTACHMENT = "INSERT INTO messageAttachment (msgId, attachmentNum, attachmentSize, contentType, name, description) VALUES (?, ?, ?, ?, ?, ?)"; + private static final String SQL_INSERT_MESSAGE_ATTACHMENT_DATA = "INSERT INTO messageAttachmentData (msgId, attachmentNum, dataBinary) VALUES (?, ?, ?)"; + private static final String SQL_INSERT_MESSAGE_ATTACHMENT_CONFIG = "INSERT INTO messageAttachmentConfig (msgId, attachmentNum, dataString) VALUES (?, ?, ?)"; + private void insertAttachment(long msgId, int attachmentId) throws SQLException { + PreparedStatement stmt = null; + try { + byte data[] = _body.getAttachment(attachmentId); + String type = _body.getAttachmentConfigString(attachmentId, Constants.MSG_ATTACH_CONTENT_TYPE); + String name = _body.getAttachmentConfigString(attachmentId, Constants.MSG_ATTACH_NAME); + String desc = _body.getAttachmentConfigString(attachmentId, Constants.MSG_ATTACH_DESCRIPTION); + + String cfg = formatConfig(_body.getAttachmentConfig(attachmentId)); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_ATTACHMENT); + //(msgId, attachmentNum, attachmentSize, contentType, name, description) + stmt.setLong(1, msgId); + stmt.setInt(2, attachmentId); + stmt.setLong(3, data.length); + if (type != null) + stmt.setString(4, CommandImpl.strip(type)); + else + stmt.setNull(4, Types.VARCHAR); + if (name != null) + stmt.setString(5, CommandImpl.strip(name)); + else + stmt.setNull(5, Types.VARCHAR); + if (desc != null) + stmt.setString(6, 
CommandImpl.strip(desc)); + else + stmt.setNull(6, Types.VARCHAR); + stmt.executeUpdate(); + + stmt.close(); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_ATTACHMENT_DATA); + //(msgId, attachmentNum, dataBinary) + stmt.setLong(1, msgId); + stmt.setInt(2, attachmentId); + stmt.setBytes(3, data); + stmt.executeUpdate(); + stmt.close(); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_ATTACHMENT_CONFIG); + //(msgId, attachmentNum, dataBinary) + stmt.setLong(1, msgId); + stmt.setInt(2, attachmentId); + stmt.setString(3, cfg); + stmt.executeUpdate(); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + static final String SQL_DELETE_MESSAGE_PAGES = "DELETE FROM messagePage WHERE msgId = ?"; + static final String SQL_DELETE_MESSAGE_PAGE_DATA = "DELETE FROM messagePageData WHERE msgId = ?"; + static final String SQL_DELETE_MESSAGE_PAGE_CONFIG = "DELETE FROM messagePageConfig WHERE msgId = ?"; + private void setMessagePages(long msgId) throws SQLException { + _client.exec(SQL_DELETE_MESSAGE_PAGES, msgId); + _client.exec(SQL_DELETE_MESSAGE_PAGE_DATA, msgId); + _client.exec(SQL_DELETE_MESSAGE_PAGE_CONFIG, msgId); + for (int i = 0; i < _body.getPages(); i++) + insertPage(msgId, i); + } + private static final String SQL_INSERT_MESSAGE_PAGE = "INSERT INTO messagePage (msgId, pageNum, contentType) VALUES (?, ?, ?)"; + private static final String SQL_INSERT_MESSAGE_PAGE_DATA = "INSERT INTO messagePageData (msgId, pageNum, dataString) VALUES (?, ?, ?)"; + private static final String SQL_INSERT_MESSAGE_PAGE_CONFIG = "INSERT INTO messagePageConfig (msgId, pageNum, dataString) VALUES (?, ?, ?)"; + private void insertPage(long msgId, int pageId) throws SQLException { + PreparedStatement stmt = null; + try { + byte data[] = _body.getPage(pageId); + String type = _body.getPageConfigString(pageId, Constants.MSG_PAGE_CONTENT_TYPE); + + String cfg = formatConfig(_body.getPageConfig(pageId)); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_PAGE); + //(msgId, pageNum, contentType) + stmt.setLong(1, msgId); + stmt.setInt(2, pageId); + if (type != null) + stmt.setString(3, CommandImpl.strip(type)); + else + stmt.setNull(3, Types.VARCHAR); + stmt.executeUpdate(); + + stmt.close(); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_PAGE_DATA); + //(msgId, pageNum, dataString) + stmt.setLong(1, msgId); + stmt.setInt(2, pageId); + if (data != null) + stmt.setString(3, DataHelper.getUTF8(data)); + else + stmt.setNull(3, Types.VARCHAR); + stmt.executeUpdate(); + stmt.close(); + + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_PAGE_CONFIG); + //(msgId, pageNum, dataString) + stmt.setLong(1, msgId); + stmt.setInt(2, pageId); + stmt.setString(3, cfg); + stmt.executeUpdate(); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private String formatConfig(Properties props) { + StringBuffer rv = new StringBuffer(); + for (Iterator iter = props.keySet().iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = props.getProperty(key); + rv.append(CommandImpl.strip(key)).append('=').append(CommandImpl.strip(val)).append('\n'); + } + return rv.toString(); + } + + static final String SQL_DELETE_MESSAGE_REF_URIS = "DELETE FROM uriAttribute WHERE uriId IN (SELECT uriId FROM messageReference WHERE msgId = ?)"; + static final String SQL_DELETE_MESSAGE_REFS = "DELETE FROM messageReference WHERE msgId = ?"; + private void setMessageReferences(long msgId) throws SQLException 
{ + _client.exec(SQL_DELETE_MESSAGE_REF_URIS, msgId); + _client.exec(SQL_DELETE_MESSAGE_REFS, msgId); + List refs = new ArrayList(); + for (int i = 0; i < _body.getReferenceRootCount(); i++) + refs.add(_body.getReferenceRoot(i)); + _ui.debugMessage("Importing reference roots: " + refs.size()); + InsertRefVisitor visitor = new InsertRefVisitor(msgId); + ReferenceNode.walk(refs, visitor); + if (visitor.getError() != null) { + _ui.errorMessage(visitor.getError()); + if (visitor.getException() != null) + throw visitor.getException(); + } + } + + private static final String SQL_INSERT_MESSAGE_REF = "INSERT INTO messageReference " + + "(msgId, referenceId, parentReferenceId, siblingOrder, name, description, uriId, refType)" + + " VALUES (?, ?, ?, ?, ?, ?, ?, ?)"; + private class InsertRefVisitor implements ReferenceNode.Visitor { + private long _msgId; + private int _node; + private SQLException _exception; + private String _err; + public InsertRefVisitor(long msgId) { + _msgId = msgId; + _node = 0; + _exception = null; + _err = null; + } + public SQLException getException() { return _exception; } + public String getError() { return _err; } + + public void visit(ReferenceNode node, int depth, int siblingOrder) { + if (_err != null) return; + + int referenceId = node.getTreeIndexNum(); + if (referenceId < 0) { + referenceId = _node; + node.setTreeIndexNum(referenceId); + } + int parentReferenceId = -1; + if (node.getParent() != null) + parentReferenceId = node.getParent().getTreeIndexNum(); + String name = node.getName(); + String desc = node.getDescription(); + String type = node.getReferenceType(); + long uriId = _client.addURI(node.getURI()); + _node++; + + PreparedStatement stmt = null; + try { + _ui.debugMessage("Importing reference: " + referenceId + ", uri " + uriId + ", type: " + type); + stmt = _client.con().prepareStatement(SQL_INSERT_MESSAGE_REF); + // (msgId, referenceId, parentReferenceId, siblingOrder, name, description, uriId, refType) + stmt.setLong(1, _msgId); + stmt.setInt(2, referenceId); + stmt.setInt(3, parentReferenceId); + stmt.setInt(4, siblingOrder); + stmt.setString(5, CommandImpl.strip(name)); + stmt.setString(6, CommandImpl.strip(desc)); + stmt.setLong(7, uriId); + stmt.setString(8, type); + stmt.executeUpdate(); + } catch (SQLException se) { + _exception = se; + _err = "Error inserting the reference"; + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + } + + private static void saveToArchive(DBClient client, UI ui, Hash ident, Enclosure enc) { + SyndieURI uri = enc.getHeaderURI(Constants.MSG_HEADER_POST_URI); + if ( (uri == null) || (uri.getScope() == null) || (uri.getMessageId() == null) ) { + ui.errorMessage("Unable to save the post to the archive, as the uri was not ok: " + uri); + return; + } + + File outDir = new File(client.getArchiveDir(), ident.toBase64()); + outDir.mkdirs(); + File outMeta = new File(outDir, uri.getMessageId().longValue()+Constants.FILENAME_SUFFIX); + try { + enc.store(outMeta.getPath()); + ui.debugMessage("Post saved to the archive at " + outMeta.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error saving the metadata to the archive", ioe); + } + } +} diff --git a/src/syndie/db/Importer.java b/src/syndie/db/Importer.java new file mode 100644 index 0000000..2b9b2f2 --- /dev/null +++ b/src/syndie/db/Importer.java @@ -0,0 +1,232 @@ +package syndie.db; + +import java.io.File; +import java.io.FileInputStream; +import java.io.IOException; +import java.io.InputStream; +import 
java.sql.SQLException; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import net.i2p.data.Signature; +import net.i2p.data.SigningPublicKey; +import syndie.Constants; +import syndie.data.Enclosure; + +/** + * Import a message for the user, using the keys known to that user and + * storing the data in the database they can access. + * CLI import + * --db $dbURL + * --login $login + * --pass $pass + * --in $filename + * [--passphrase $bodyPassphrase] + */ +public class Importer extends CommandImpl { + private DBClient _client; + private String _passphrase; + private boolean _wasPBE; + + public Importer(DBClient client, String pass) { + _client = client; + _passphrase = pass; + _wasPBE = false; + } + Importer() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "in" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "in" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + client.connect(args.getOptValue("db")); + nymId = client.getNymId(args.getOptValue("login"), args.getOptValue("pass")); + if (DBClient.NYM_ID_LOGIN_UNKNOWN == nymId) { + ui.errorMessage("Unknown login '" + args.getOptValue("login") + "'"); + ui.commandComplete(-1, null); + return client; + } else if (DBClient.NYM_ID_PASSPHRASE_INVALID == nymId) { + ui.errorMessage("Invalid passphrase"); + ui.commandComplete(-1, null); + return client; + } + } else { + nymId = client.getLoggedInNymId(); + if (nymId < 0) { + ui.errorMessage("Login details required"); + ui.commandComplete(-1, null); + return client; + } + } + + File file = new File(args.getOptValue("in")); + if (!file.isFile()) { + ui.errorMessage("File does not exist"); + ui.commandComplete(-1, null); + return client; + } + + _client = client; + _passphrase = client.getPass(); + boolean ok = processMessage(ui, new FileInputStream(file), nymId, client.getPass(), args.getOptValue("passphrase")); + ui.debugMessage("Metadata processed"); + if (!ok) // successful imports specify whether they were decrypted (exit code of 0) or undecryptable (exit code of 1) + ui.commandComplete(-1, null); + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + } catch (IOException ioe) { + ui.errorMessage("Error importing the message", ioe); + ui.commandComplete(-1, null); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "import", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j", + "--in", "/tmp/metaOut" }); + } catch (Exception e) { e.printStackTrace(); } + } + + public static void omain(String args[]) { + if ( (args == null) || (args.length != 4) ) + throw new RuntimeException("Usage: Importer $dbURL $login $password $filenameToImport"); + DBClient client = null; + try { + client = new DBClient(I2PAppContext.getGlobalContext(), new 
File(TextEngine.getRootPath())); + client.connect(args[0]); + long nymId = client.getNymId(args[1], args[2]); + if (DBClient.NYM_ID_LOGIN_UNKNOWN == nymId) + throw new RuntimeException("Unknown login"); + else if (DBClient.NYM_ID_PASSPHRASE_INVALID == nymId) + throw new RuntimeException("Invalid passphrase"); + + File file = new File(args[3]); + if (!file.isFile()) + throw new RuntimeException("File does not exist"); + + Importer imp = new Importer(client, args[2]); + //imp.processMessage(new FileInputStream(file), nymId, args[2]); + } catch (SQLException se) { + throw new RuntimeException("Invalid database URL: " + se.getMessage(), se); + } finally { + if (client != null) client.close(); + } + } + + /** + * process the message, importing it if possible. If it was imported but + * could not be decrypted (meaning that it is authentic and/or authorized), + * it will fire ui.commandComplete with an exit value of 1. if it was imported + * and read, it will fire ui.commandComplete with an exit value of 0. otherwise, + * it will not fire an implicit ui.commandComplete. + */ + public boolean processMessage(UI ui, InputStream source, long nymId, String pass, String bodyPassphrase) throws IOException { + if (bodyPassphrase != null) + ui.debugMessage("Processing message with body passphrase " + bodyPassphrase); + else + ui.debugMessage("Processing message with no body passphrase"); + _wasPBE = false; + boolean rv = true; + boolean isMeta = false; + Enclosure enc = new Enclosure(source); + try { + String format = enc.getEnclosureType(); + if (format == null) { + throw new IOException("No enclosure type"); + } else if (!format.startsWith(Constants.TYPE_PREFIX)) { + throw new IOException("Unsupported enclosure format: " + format); + } + _wasPBE = (enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT) != null); + + String type = enc.getHeaderString(Constants.MSG_HEADER_TYPE); + if (Constants.MSG_TYPE_META.equals(type)) { // validate and import metadata message + rv = importMeta(ui, enc, nymId, bodyPassphrase); + isMeta = true; + } else if (Constants.MSG_TYPE_POST.equals(type)) { // validate and import content message + rv = importPost(ui, enc, nymId, pass, bodyPassphrase); + } else if (Constants.MSG_TYPE_REPLY.equals(type)) { // validate and import reply message + rv = importPost(ui, enc, nymId, pass, bodyPassphrase); + } else { + throw new IOException("Invalid message type: " + type); + } + } finally { + enc.discardData(); + } + return rv; + } + /** was the last message processed encrypted with a passphrase? 
*/ + public boolean wasPBE() { return _wasPBE; } + + protected boolean importMeta(UI ui, Enclosure enc, long nymId, String bodyPassphrase) { + // first check that the metadata is signed by an authorized key + if (verifyMeta(ui, enc)) { + return ImportMeta.process(_client, ui, enc, nymId, _passphrase, bodyPassphrase); + } else { + ui.errorMessage("meta does not verify"); + return false; + } + } + /** + * The metadata message is ok if it is either signed by the channel's + * identity itself or by one of the manager keys + */ + private boolean verifyMeta(UI ui, Enclosure enc) { + SigningPublicKey pubKey = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + Signature sig = enc.getAuthorizationSig(); + boolean ok = verifySig(_client, sig, enc.getAuthorizationHash(), pubKey); + if (!ok) { + ui.debugMessage("authorization hash does not match identity (authHash: " + enc.getAuthorizationHash().toBase64() + " sig: " + sig.toBase64() + ")"); + SigningPublicKey pubKeys[] = enc.getHeaderSigningKeys(Constants.MSG_META_HEADER_MANAGER_KEYS); + if (pubKeys != null) { + for (int i = 0; i < pubKeys.length; i++) { + if (verifySig(_client, sig, enc.getAuthorizationHash(), pubKeys[i])) { + ui.debugMessage("authorization hash matches a manager key"); + ok = true; + break; + } else { + ui.debugMessage("authorization hash does not match manager key " + pubKeys[i].toBase64()); + } + } + } + } else { + ui.debugMessage("authorization hash matches"); + boolean authenticated = verifySig(_client, enc.getAuthenticationSig(), enc.getAuthenticationHash(), pubKey); + if (authenticated) + ui.debugMessage("authentication hash matches"); + else + ui.debugMessage("authentication hash does not match the identity, but that's alright"); + } + return ok; + } + + protected boolean importPost(UI ui, Enclosure enc, long nymId, String pass, String bodyPassphrase) { + return ImportPost.process(_client, ui, enc, nymId, pass, bodyPassphrase); + } +} diff --git a/src/syndie/db/KeyGen.java b/src/syndie/db/KeyGen.java new file mode 100644 index 0000000..7f96cb7 --- /dev/null +++ b/src/syndie/db/KeyGen.java @@ -0,0 +1,77 @@ +package syndie.db; + +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.Constants; + +/** + *CLI keygen + * --type (signing|encryption|post) + * [--scope $base64(channelHash)] + * (--pubOut $pubKeyFile --privOut $privKeyFile | --sessionOut $sessionKeyFile) + */ +public class KeyGen extends CommandImpl { + KeyGen() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + List missing = args.requireOpts(new String[] { "type" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + + String type = args.getOptValue("type"); + String scopeStr = args.getOptValue("scope"); + String pubOut = args.getOptValue("pubOut"); + String privOut = args.getOptValue("privOut"); + String sessOut = args.getOptValue("sessionOut"); + + Hash scope = null; + if (scopeStr != null) { + byte b[] = Base64.decode(scopeStr); + if ( (b != null) && (b.length == Hash.HASH_LENGTH) ) + scope = new Hash(b); + } + + if (Constants.KEY_FUNCTION_MANAGE.equals(type) || // DSA + Constants.KEY_FUNCTION_POST.equals(type) || // DSA + Constants.KEY_FUNCTION_REPLY.equals(type)) { // ElGamal + if ( (privOut == null) || (pubOut == null) || + (privOut.length() <= 0) || (pubOut.length() <= 0) ) { + ui.errorMessage("pubOut and privOut are required for asymmetric key types"); + ui.commandComplete(-1, null); + return client; + 
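+ // (the else branch below generates the keypair itself: reply keys get an ElGamal keypair for decrypting private replies, while manage and post keys get a DSA signing keypair; each half is written out via writeKey, with the public half tagged "$type-pub")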
} else { + if (Constants.KEY_FUNCTION_REPLY.equals(type)) { // ElGamal + Object keys[] = I2PAppContext.getGlobalContext().keyGenerator().generatePKIKeypair(); + PublicKey pub = (PublicKey)keys[0]; + PrivateKey priv = (PrivateKey)keys[1]; + writeKey(ui, privOut, type, scope, priv.toBase64()); + writeKey(ui, pubOut, type + "-pub", scope, pub.toBase64()); + } else { // DSA + Object keys[] = I2PAppContext.getGlobalContext().keyGenerator().generateSigningKeypair(); + SigningPublicKey pub = (SigningPublicKey)keys[0]; + SigningPrivateKey priv = (SigningPrivateKey)keys[1]; + writeKey(ui, privOut, type, scope, priv.toBase64()); + writeKey(ui, pubOut, type + "-pub", scope, pub.toBase64()); + } + } + } else if (Constants.KEY_FUNCTION_READ.equals(type)) { // AES + if ( (sessOut == null) || (sessOut.length() <= 0) ) { + ui.errorMessage("sessionOut is required for symmetric key types"); + ui.commandComplete(-1, null); + return client; + } else { + SessionKey key = I2PAppContext.getGlobalContext().keyGenerator().generateSessionKey(); + writeKey(ui, sessOut, type, scope, key.toBase64()); + } + } else { + ui.errorMessage("key type not known"); + ui.commandComplete(-1, null); + return client; + } + ui.commandComplete(0, null); + return client; + } +} diff --git a/src/syndie/db/KeyImport.java b/src/syndie/db/KeyImport.java new file mode 100644 index 0000000..1ce2ecf --- /dev/null +++ b/src/syndie/db/KeyImport.java @@ -0,0 +1,187 @@ +package syndie.db; + +import java.io.*; +import java.sql.*; +import java.util.List; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.NymKey; + +/** + *CLI keyimport + * --db $dbURL + * --login $login + * --pass $pass + * --keyfile $keyFile // keytype: (manage|reply|read)\nscope: $base64(channelHash)\nraw: $base64(data)\n + * [--authentic $boolean] + */ +public class KeyImport extends CommandImpl { + KeyImport() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "keyfile" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + String db = args.getOptValue("db"); + String login = args.getOptValue("login"); + String pass = args.getOptValue("pass"); + String keyFile = args.getOptValue("keyfile"); + boolean authentic = args.getOptBoolean("authentic", false); + + return importKey(ui, client, db, login, pass, keyFile, authentic); + } + + private DBClient importKey(UI ui, DBClient client, String db, String login, String pass, String keyFile, boolean authentic) { + File f = new File(keyFile); + if (!f.exists()) { + ui.errorMessage("Key file does not exist: " + keyFile); + ui.commandComplete(-1, null); + return client; + } + FileInputStream fin = null; + try { + fin = new FileInputStream(f); + String line = DataHelper.readLine(fin); + if (!line.startsWith("keytype: ") || (line.length() < ("keytype: ".length() + 1))) + throw new IOException("Invalid type line: " + line); + String type = line.substring("keytype: ".length()).trim(); + + line = DataHelper.readLine(fin); + if (!line.startsWith("scope: ") || (line.length() < ("scope: ".length() + 1))) + throw new IOException("Invalid scope line: " + line); + String scope = line.substring("scope: ".length()).trim(); + + line = DataHelper.readLine(fin); + if (!line.startsWith("raw: ") || (line.length() < ("raw: ".length() + 1))) + throw new IOException("Invalid
raw line: " + line); + String raw = line.substring("raw: ".length()).trim(); + + byte scopeData[] = Base64.decode(scope); + if ( (scopeData != null) && (scopeData.length != Hash.HASH_LENGTH) ) + scopeData = null; + byte rawData[] = Base64.decode(raw); + + ui.debugMessage("importing from " + f.getPath() +": type=" + type + " scope=" + scope + " raw=" + raw); + client = importKey(ui, client, db, login, pass, type, new Hash(scopeData), rawData, authentic); + fin = null; + return client; + } catch (IOException ioe) { + ui.errorMessage("Error importing the key", ioe); + ui.commandComplete(-1, null); + return client; + } finally { + if (fin != null) try { fin.close(); } catch (IOException ioe) {} + } + } + + private static final String SQL_INSERT_KEY = "INSERT INTO nymKey " + + "(nymId, keyChannel, keyFunction, keyType, keyData, keySalt, authenticated, keyPeriodBegin, keyPeriodEnd)" + + " VALUES " + + "(?, ?, ?, ?, ?, ?, ?, NULL, NULL)"; + public static DBClient importKey(UI ui, DBClient client, String type, Hash scope, byte[] raw, boolean authenticated) { + return importKey(ui, client, null, null, null, type, scope, raw, authenticated); + } + public static DBClient importKey(UI ui, DBClient client, String db, String login, String pass, String type, Hash scope, byte[] raw, boolean authenticated) { + try { + long nymId = -1; + if ( (db != null) && (login != null) && (pass != null) ) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + client.connect(db); + nymId = client.getNymId(login, pass); + } else if (client != null) { + nymId = client.getLoggedInNymId(); + login = client.getLogin(); + pass = client.getPass(); + } + if (nymId == -1) + throw new SQLException("Login unknown"); + else if (nymId == -2) + throw new SQLException("Password invalid"); + + List existing = client.getNymKeys(nymId, pass, scope, type); + for (int i = 0; i < existing.size(); i++) { + NymKey cur = (NymKey)existing.get(i); + if (DataHelper.eq(cur.getData(), raw)) { + ui.statusMessage("Key already imported (type: " + type + ", " + cur.getFunction() + "/" + + cur.getType() + " raw.length=" + raw.length + ", " + cur.getData().length); + //ui.commandComplete(0, null); + return client; + } + } + + if (Constants.KEY_FUNCTION_MANAGE.equals(type) || Constants.KEY_FUNCTION_POST.equals(type)) { + SigningPrivateKey priv = new SigningPrivateKey(raw); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + if (pub.calculateHash().equals(scope)) { + ui.statusMessage("Importing an identity key for " + scope.toBase64()); + } else { + ui.debugMessage("Importing a key that is NOT an identity key for " + scope.toBase64() + "?"); + ui.debugMessage("calculated pub: " + pub.calculateHash().toBase64()); + ui.debugMessage("aka " + pub.toBase64()); + } + } + + byte salt[] = new byte[16]; + client.ctx().random().nextBytes(salt); + SessionKey saltedKey = client.ctx().keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(pass)); + int pad = 16-(raw.length%16); + if (pad == 0) pad = 16; + byte pre[] = new byte[raw.length+pad]; + System.arraycopy(raw, 0, pre, 0, raw.length); + for (int i = 0; i < pad; i++) + pre[pre.length-1-i] = (byte)(pad&0xff); + byte encrypted[] = new byte[pre.length]; + client.ctx().aes().encrypt(pre, 0, encrypted, 0, saltedKey, salt, pre.length); + + Connection con = client.con(); + PreparedStatement stmt = con.prepareStatement(SQL_INSERT_KEY); + stmt.setLong(1, nymId); + stmt.setBytes(2, scope.getData()); + 
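+ // columns 3 and 4 record the key's function (read/manage/post/reply) and the concrete algorithm it maps to below (AES256 for read, DSA for manage/post, ElGamal2048 for reply); the raw key material goes into column 5 AES-encrypted against a key derived from the nym's passphrase and the random salt stored in column 6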
stmt.setString(3, type); + if (Constants.KEY_FUNCTION_READ.equals(type)) + stmt.setString(4, Constants.KEY_TYPE_AES256); + else if (Constants.KEY_FUNCTION_MANAGE.equals(type)) + stmt.setString(4, Constants.KEY_TYPE_DSA); + else if (Constants.KEY_FUNCTION_POST.equals(type)) + stmt.setString(4, Constants.KEY_TYPE_DSA); + else if (Constants.KEY_FUNCTION_REPLY.equals(type)) + stmt.setString(4, Constants.KEY_TYPE_ELGAMAL2048); + + stmt.setBytes(5, encrypted); + stmt.setBytes(6, salt); + stmt.setBoolean(7, authenticated); + int rows = stmt.executeUpdate(); + if (rows == 1) { + ui.statusMessage("Keys imported (type " + type + " scope " + scope.toBase64() + " hash:" + client.ctx().sha().calculateHash(raw).toBase64() + " rows " + rows + ")"); + } else { + throw new SQLException("Error importing keys: row count of " + rows); + } + con.commit(); + + ui.commandComplete(0, null); + } catch (SQLException se) { + ui.errorMessage("Error importing the key", se); + ui.commandComplete(-1, null); + } + + return client; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "keyimport", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", "--pass", "j", + "--keyfile", "/tmp/manageOut" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/KeyList.java b/src/syndie/db/KeyList.java new file mode 100644 index 0000000..ef9c332 --- /dev/null +++ b/src/syndie/db/KeyList.java @@ -0,0 +1,92 @@ +package syndie.db; + +import java.io.File; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import net.i2p.data.SigningPrivateKey; +import net.i2p.data.SigningPublicKey; +import syndie.Constants; +import syndie.data.NymKey; + +/** + *CLI keylist + * --db $url + * --login $login + * --pass $pass + * [--channel $base64(channelHash)] + * [--function (read|manage|reply|post)] + */ +public class KeyList extends CommandImpl { + KeyList() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + nymId = client.connect(args.getOptValue("db"), args.getOptValue("login"), args.getOptValue("pass")); + if (nymId < 0) { + ui.errorMessage("Login invalid"); + ui.commandComplete(-1, null); + return client; + } + } + if ( (client != null) && (nymId < 0) ) + nymId = client.getLoggedInNymId(); + if (nymId < 0) { + ui.errorMessage("Not logged in and no db specified"); + ui.commandComplete(-1, null); + return client; + } + byte val[] = args.getOptBytes("channel"); + Hash channel = null; + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) + channel = new Hash(val); + String fn = args.getOptValue("function"); + List keys = client.getNymKeys(nymId, client.getPass(), channel, fn); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + ui.statusMessage(key.toString()); + if (Constants.KEY_TYPE_DSA.equals(key.getType())) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + Hash pubIdent = pub.calculateHash(); + if 
(key.getChannel().equals(pubIdent)) { + ui.statusMessage(" - verifies as an identity key (size: " + key.getData().length + "/" + SigningPrivateKey.KEYSIZE_BYTES + ")"); + } else { + ui.statusMessage(" - does not verify as an identity key (size: " + key.getData().length + "/" + SigningPrivateKey.KEYSIZE_BYTES + ")"); + } + } + } + ui.commandComplete(0, null); + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "keylist", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/LoginManager.java b/src/syndie/db/LoginManager.java new file mode 100644 index 0000000..a8c877e --- /dev/null +++ b/src/syndie/db/LoginManager.java @@ -0,0 +1,248 @@ +package syndie.db; + +import java.io.*; +import java.sql.SQLException; +import java.util.List; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import syndie.Constants; +import syndie.data.NymKey; + +/** + * register + * --db $jdbcURL + * --login $nymLogin + * --pass $nymPass + * --name $nymName + * [--root $dir] + * [--simple $boolean] // implies that successful registration should be followed by changen & keyimport, allowing all of their args on the cli + */ +public class LoginManager extends CommandImpl { + private DBClient _client; + public LoginManager(DBClient client) { _client = client; } + + LoginManager() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "name" }); + if (missing.size() > 0) { + ui.errorMessage("Usage: register [--db $jdbcURL] --login $nym --pass $password --name $publicName [--simple $boolean]"); + ui.errorMessage("The JDBC URL can be an in-memory database (e.g. jdbc:hsqldb:mem:test),"); + ui.errorMessage("an on-disk database (jdbc:hsqldb:file:/some/path), "); + ui.errorMessage("a remote database (jdbc:hsqldb:hsql:hostname:port:dbName), or "); + ui.errorMessage("any other JDBC database URL"); + ui.errorMessage("The nym and password refer to the syndie-specific nym, not to the database"); + ui.errorMessage("The name is the publicly visible name of the nym (in their created blog)"); + ui.errorMessage("If simple is true (it is by default), it automatically creates a blog for the new nym,"); + ui.errorMessage("and imports all of the appropriate keys for the nym's account. If it is false, it simply"); + ui.errorMessage("creates the nym but without any channels or keys."); + ui.debugMessage("you have: " + args); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "login", "pass", "name" }); + if (missing.size() > 0) { + ui.errorMessage("Usage: register [--db $jdbcURL] --login $nym --pass $password --name $publicName [--simple $boolean]"); + ui.errorMessage("The JDBC URL can be an in-memory database (e.g. 
jdbc:hsqldb:mem:test),"); + ui.errorMessage("an on-disk database (jdbc:hsqldb:file:/some/path), "); + ui.errorMessage("a remote database (jdbc:hsqldb:hsql:hostname:port:dbName), or "); + ui.errorMessage("any other JDBC database URL"); + ui.errorMessage("The nym and password refer to the syndie-specific nym, not to the database"); + ui.errorMessage("The name is the publicly visible name of the nym (in their created blog)"); + ui.errorMessage("If simple is true (it is by default), it automatically creates a blog for the new nym,"); + ui.errorMessage("and imports all of the appropriate keys for the nym's account. If it is false, it simply"); + ui.errorMessage("creates the nym but without any channels or keys."); + ui.commandComplete(-1, null); + return client; + } + } + + try { + if (args.dbOptsSpecified()) { + if (client == null) { + String root = args.getOptValue("root"); + if (root == null) + root = TextEngine.getRootPath(); + client = new DBClient(I2PAppContext.getGlobalContext(), new File(root)); + client.connect(args.getOptValue("db")); + } else { + //client.close(); + } + } + long nymId = client.register(args.getOptValue("login"), args.getOptValue("pass"), args.getOptValue("name")); + if (DBClient.NYM_ID_LOGIN_ALREADY_EXISTS == nymId) { + ui.errorMessage("Login already exists"); + ui.commandComplete(-1, null); + return client; + } + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + return client; + //} finally { + // if (client != null) client.close(); + } + + ui.statusMessage("Local nym created for " + args.getOptValue("login")); + + if (args.getOptBoolean("simple", true)) { + String login = client.getLogin(); + String pass = client.getPass(); + // log in to the new nym + client.getNymId(args.getOptValue("login"), args.getOptValue("pass")); + boolean ok = processSimple(args, client, ui); + client.getNymId(login, pass); // relogin to the orig login/pass (not the newly registered one) + if (!ok) + return client; + } + + ui.commandComplete(0, null); + return client; + } + + private static final boolean DELETE = true; + + private boolean processSimple(Opts args, DBClient client, UI ui) { + boolean loggedIn = client.isLoggedIn(); + File tmpDir = client.getTempDir(); //"~/.syndie/tmp/" + client.getLogin()); + if (!tmpDir.exists()) tmpDir.mkdirs(); + File metaOutFile = new File(tmpDir, "metaOut"); + File manageOutFile = new File(tmpDir, "manageOut"); + File replyOutFile = new File(tmpDir, "replyOut"); + + ChanGen cmd = new ChanGen(); + Opts changenOpts = new Opts(args); + changenOpts.addOptValue("pubTag", "blog"); + if (changenOpts.getOptValue("metaOut") != null) + metaOutFile = new File(changenOpts.getOptValue("metaOut")); + else + changenOpts.addOptValue("metaOut", metaOutFile.getPath()); + + if (changenOpts.getOptValue("keyManageOut") != null) + manageOutFile = new File(changenOpts.getOptValue("keyManageOut")); + else + changenOpts.addOptValue("keyManageOut", manageOutFile.getPath()); + + if (changenOpts.getOptValue("keyReplyOut") != null) + replyOutFile = new File(changenOpts.getOptValue("keyReplyOut")); + else + changenOpts.addOptValue("keyReplyOut", replyOutFile.getPath()); + changenOpts.setCommand("changen"); + NestedUI nestedUI = new NestedUI(ui); + client = cmd.runCommand(changenOpts, nestedUI, client); + if (nestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested changen command"); + ui.commandComplete(nestedUI.getExitCode(), null); + return false; + } + + ui.debugMessage("Channel created for the nym"); + + if 
(metaOutFile.exists()) { + // generated correctly, import the metadata and private keys + Importer msgImp = new Importer(); + Opts msgImpOpts = new Opts(); // $dbURL $login $password $filenameToImport + //msgImpOpts.setOptValue("db", args.getOptValue("db")); + //msgImpOpts.setOptValue("login", args.getOptValue("login")); + //msgImpOpts.setOptValue("pass", args.getOptValue("pass")); + msgImpOpts.setOptValue("in", metaOutFile.getPath()); + msgImpOpts.setCommand("import"); + nestedUI = new NestedUI(ui); + client = msgImp.runCommand(msgImpOpts, nestedUI, client); + if (nestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested import command (logged in? " + client.isLoggedIn() + "/" + loggedIn + ")"); + ui.commandComplete(nestedUI.getExitCode(), null); + return false; + } + ui.debugMessage("Blog channel metadata imported"); + + KeyImport imp = new KeyImport(); + Opts impOpts = new Opts(args); + impOpts.setOptValue("keyfile", manageOutFile.getPath()); + impOpts.setOptValue("authentic", "true"); + impOpts.setCommand("keyimport"); + nestedUI = new NestedUI(ui); + client = imp.runCommand(impOpts, nestedUI, client); + if (DELETE) + manageOutFile.delete(); + if (nestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested management key import command"); + ui.commandComplete(nestedUI.getExitCode(), null); + return false; + } + ui.debugMessage("Blog channel management key imported"); + + impOpts = new Opts(args); + impOpts.setOptValue("keyfile", replyOutFile.getPath()); + impOpts.setOptValue("authentic", "true"); + impOpts.setCommand("keyimport"); + nestedUI = new NestedUI(ui); + client = imp.runCommand(impOpts, nestedUI, client); + if (DELETE) + replyOutFile.delete(); + if (nestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested reply key import command"); + ui.commandComplete(nestedUI.getExitCode(), null); + return false; + } + ui.debugMessage("Blog channel reply key imported"); + + Hash chan = null; + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + if (keys.size() > 0) { + NymKey key = (NymKey)keys.get(0); + chan = key.getChannel(); + } + if (chan != null) { + File channelDir = new File(client.getOutboundDir(), chan.toBase64()); + File meta = new File(channelDir, "meta" + Constants.FILENAME_SUFFIX); + FileInputStream fis = null; + FileOutputStream fos = null; + try { + channelDir.mkdirs(); + fis = new FileInputStream(metaOutFile); + fos = new FileOutputStream(meta); + byte buf[] = new byte[4096]; + int read = -1; + while ( (read = fis.read(buf)) != -1) + fos.write(buf, 0, read); + fos.close(); + fis.close(); + fis = null; + fos = null; + metaOutFile.delete(); + ui.statusMessage("Sharable channel metadata saved to " + meta.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error migrating the metadata file to the output dir", ioe); + } finally { + if (fis != null) try { fis.close(); } catch (IOException ioe) {} + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + } + return true; + } + + public static void main(String args[]) { + if ( (args == null) || (args.length == 0) ) + args = new String[] { "nymgen", "jdbc:hsqldb:mem:test", "jr", "jrPass", "jay arr" }; + if ( (args == null) || (args.length != 5) ) + throw new RuntimeException("Usage: LoginManager nymgen $dbURL $login $password $publicName"); + + DBClient client = null; + try { + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + client.connect(args[1]); + long
nymId = client.register(args[2], args[3], args[4]); + if (DBClient.NYM_ID_LOGIN_ALREADY_EXISTS == nymId) + throw new RuntimeException("Login already exists"); + else + System.out.println("Registered as nymId " + nymId); + } catch (SQLException se) { + throw new RuntimeException("Invalid database URL: " + se.getMessage(), se); + } finally { + if (client != null) client.close(); + } + } +} diff --git a/src/syndie/db/ManageMenu.java b/src/syndie/db/ManageMenu.java new file mode 100644 index 0000000..8e295bf --- /dev/null +++ b/src/syndie/db/ManageMenu.java @@ -0,0 +1,1046 @@ +package syndie.db; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.net.URISyntaxException; +import java.text.ParseException; +import java.util.*; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import net.i2p.data.Base64; +import net.i2p.data.Hash; +import net.i2p.data.SigningPrivateKey; +import net.i2p.data.SigningPublicKey; +import syndie.Constants; +import syndie.data.ArchiveInfo; +import syndie.data.ChannelInfo; +import syndie.data.Enclosure; +import syndie.data.NymKey; +import syndie.data.SyndieURI; + +/** + * + */ +class ManageMenu implements TextEngine.Menu { + private TextEngine _engine; + /** text description of each indexed channel */ + private List _itemText; + /** internal channel id (Long) for each indexed item */ + private List _itemKeys; + /** if true, the items refer to a list of channels matching the requested criteria */ + private boolean _itemIsChannelList; + /** refers to the next index into the item lists that the user should be shown */ + private int _itemIteratorIndex; + /** current channel the user is working on (if any) */ + private ChannelInfo _currentChannel; + /** filename to pull the channel avatar from */ + private String _avatar; + /** filename to pull the channel references from */ + private String _refs; + /** if true, don't publicize the keys to decrypt the metadata content, and put a private read key in it */ + private Boolean _encryptContent; + private String _bodyPassphrase; + private String _bodyPassphrasePrompt; + /** SigningPublicKey of listed nyms */ + private List _listedNymKeys; + + public ManageMenu(TextEngine engine) { + _engine = engine; + _itemText = new ArrayList(); + _itemKeys = new ArrayList(); + _listedNymKeys = new ArrayList(); + _itemIsChannelList = false; + _itemIteratorIndex = 0; + _currentChannel = null; + _avatar = null; + _refs = null; + _encryptContent = null; + } + + public static final String NAME = "manage"; + public String getName() { return NAME; } + public String getDescription() { return "channel management menu"; } + public boolean requireLoggedIn() { return true; } + public void listCommands(UI ui) { + ui.statusMessage(" channels : display a list of channels the current nym can manage"); + if (_itemIsChannelList) { + ui.statusMessage(" next [--lines $num]: paginate through the channels, 10 or $num at a time"); + ui.statusMessage(" prev [--lines $num]: paginate through the channels, 10 or $num at a time"); + } + ui.statusMessage(" meta [--channel ($index|$hash)] : display the channel's metadata"); + if (_currentChannel == null) { + ui.statusMessage(" create : begin the process of creating a new channel"); + ui.statusMessage(" update --channel ($index|$hash): begin the process of updating an existing channel"); + } else { + ui.statusMessage(" set [$opts] : set 
various options on the channel being created/updated,"); + ui.statusMessage(" : using the options from the ChanGen command"); + ui.statusMessage(" listnyms [--name $namePrefix] [--channel $hashPrefix]"); + ui.statusMessage(" : list locally known nyms matching the criteria"); + ui.statusMessage(" addnym (--nym $index | --key $base64(pubKey)) --action (manage|post)"); + ui.statusMessage(" removenym (--nym $index | --key $base64(pubKey)) --action (manage|post)"); + ui.statusMessage(" preview : summarize the channel configuration"); + ui.statusMessage(" execute --out $outputDir: create/update the channel, generating the metadata and "); + ui.statusMessage(" : private keys in the given dir, and importing them into the current "); + ui.statusMessage(" : nym. also clears the current create or update state"); + ui.statusMessage(" cancel : clear the current create|update state without updating anything"); + } + } + public boolean processCommands(DBClient client, UI ui, Opts opts) { + String cmd = opts.getCommand(); + if ("channels".equalsIgnoreCase(cmd)) { + processChannels(client, ui, opts); + } else if ("next".equalsIgnoreCase(cmd)) { + processNext(client, ui, opts); + } else if ("prev".equalsIgnoreCase(cmd)) { + processPrev(client, ui, opts); + } else if ("meta".equalsIgnoreCase(cmd)) { + processMeta(client, ui, opts); + } else if ("create".equalsIgnoreCase(cmd)) { + processCreate(client, ui, opts); + } else if ("update".equalsIgnoreCase(cmd)) { + processUpdate(client, ui, opts); + } else if ("cancel".equalsIgnoreCase(cmd)) { + _currentChannel = null; + _avatar = null; + _refs = null; + _encryptContent = null; + _bodyPassphrase = null; + _bodyPassphrasePrompt = null; + ui.statusMessage("Process cancelled"); + ui.commandComplete(0, null); + } else if ("set".equalsIgnoreCase(cmd)) { + processSet(client, ui, opts); + } else if ("listnyms".equalsIgnoreCase(cmd)) { + processListNyms(client, ui, opts); + } else if ("addnym".equalsIgnoreCase(cmd)) { + processAddNym(client, ui, opts); + } else if ("removenym".equalsIgnoreCase(cmd)) { + processRemoveNym(client, ui, opts); + } else if ("preview".equalsIgnoreCase(cmd)) { + processPreview(client, ui, opts); + } else if ("execute".equalsIgnoreCase(cmd)) { + processExecute(client, ui, opts); + } else { + return false; + } + return true; + } + public List getMenuLocation(DBClient client, UI ui) { + List rv = new ArrayList(); + rv.add("manage"); + if ( (_currentChannel != null) && (_currentChannel.getChannelHash() != null) ) { + rv.add("update " + CommandImpl.strip(_currentChannel.getName()) + "/" + _currentChannel.getChannelHash().toBase64().substring(0,6)); + } else if (_currentChannel != null) { + rv.add("create"); + } + return rv; + } + + private static final SimpleDateFormat _dayFmt = new SimpleDateFormat("yyyy/MM/dd"); + private static final String SQL_LIST_MANAGED_CHANNELS = "SELECT channelId FROM channelManageKey WHERE authPubKey = ?"; + /** channels */ + private void processChannels(DBClient client, UI ui, Opts opts) { + _itemIteratorIndex = 0; + _itemIsChannelList = true; + _itemKeys.clear(); + _itemText.clear(); + + List manageKeys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + List pubKeys = new ArrayList(); + // first, go through and find all the 'identity' channels - those that we have + // the actual channel signing key for + for (int i = 0; i < manageKeys.size(); i++) { + NymKey key = (NymKey)manageKeys.get(i); + if (key.getAuthenticated()) { + SigningPrivateKey priv = new 
SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + pubKeys.add(pub); + Hash chan = pub.calculateHash(); + long chanId = client.getChannelId(chan); + if (chanId >= 0) { + ChannelInfo info = client.getChannel(chanId); + _itemKeys.add(new Long(chanId)); + _itemText.add("Identity channel " + CommandImpl.strip(info.getName()) + " (" + chan.toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } + } else { + ui.debugMessage("Nym key is not authenticated: " + key.getChannel().toBase64()); + } + } + + // now, go through and see what other channels our management keys are + // authorized to manage (beyond their identity channels) + Connection con = client.con(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = con.prepareStatement(SQL_LIST_MANAGED_CHANNELS); + for (int i = 0; i < pubKeys.size(); i++) { + SigningPublicKey key = (SigningPublicKey)pubKeys.get(i); + stmt.setBytes(1, key.getData()); + rs = stmt.executeQuery(); + while (rs.next()) { + // channelId + long chanId = rs.getLong(1); + if (!rs.wasNull()) { + Long id = new Long(chanId); + if (!_itemKeys.contains(id)) { + ChannelInfo info = client.getChannel(chanId); + if (info != null) { + _itemKeys.add(id); + _itemText.add("Authorized channel " + CommandImpl.strip(info.getName()) + " (" + info.getChannelHash().toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } + } + } + } + rs.close(); + } + } catch (SQLException se) { + ui.errorMessage("Internal error listing channels", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + ui.statusMessage(_itemKeys.size() + " channels matched - use 'next' to view them"); + ui.commandComplete(0, null); + } + + /** next [--lines $num] : iterate through the channels */ + private void processNext(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + String name = "channels"; + if (_itemIsChannelList) { + if (_itemIteratorIndex >= _itemKeys.size()) { + ui.statusMessage("No more " + name + " - use 'prev' to review earlier " + name); + ui.commandComplete(0, null); + } else { + int end = Math.min(_itemIteratorIndex+num, _itemKeys.size()); + ui.statusMessage(name + " " + _itemIteratorIndex + " through " + (end-1) + " of " + (_itemKeys.size()-1)); + while (_itemIteratorIndex < end) { + String desc = (String)_itemText.get(_itemIteratorIndex); + ui.statusMessage(_itemIteratorIndex + ": " + desc); + _itemIteratorIndex++; + } + int remaining = _itemKeys.size() - _itemIteratorIndex; + if (remaining > 0) + ui.statusMessage(remaining + " " + name + " remaining"); + else + ui.statusMessage("No more " + name + " - use 'prev' to review earlier " + name); + ui.commandComplete(0, null); + } + } else { + ui.statusMessage("Cannot iterate through the list, as no channels have been selected"); + ui.commandComplete(-1, null); + } + } + + /** prev [--lines $num] : iterate through the channels */ + private void processPrev(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + _itemIteratorIndex -= num; + if (_itemIteratorIndex < 0) + _itemIteratorIndex = 0; + processNext(client, ui, opts); + } + + private void processCreate(DBClient client, UI ui, Opts opts) { + if (_currentChannel != null) { + ui.errorMessage("Cannot create a new channel - an existing create process is already in progress");
+ ui.errorMessage("Cancel or complete that process before continuing (with the cancel or execute commands)"); + ui.commandComplete(-1, null); + return; + } + _currentChannel = new ChannelInfo(); + _avatar = null; + _refs = null; + _encryptContent = null; + + // now populate it with some default values + SigningPublicKey nymPub = getNymPublicKey(client); + if (nymPub != null) { + ui.debugMessage("Nym identity channel public key guessed, adding it as a manager to the new channel"); + Set managers = new HashSet(); + managers.add(nymPub); + _currentChannel.setAuthorizedManagers(managers); + } else { + _currentChannel.setAuthorizedManagers(new HashSet()); + } + + _currentChannel.setAuthorizedPosters(new HashSet()); + _currentChannel.setPrivateArchives(new HashSet()); + _currentChannel.setPrivateHeaders(new Properties()); + _currentChannel.setPrivateTags(new HashSet()); + _currentChannel.setPublicArchives(new HashSet()); + _currentChannel.setPublicHeaders(new Properties()); + _currentChannel.setPublicTags(new HashSet()); + _currentChannel.setReadKeys(new HashSet()); + _currentChannel.setReferences(new ArrayList()); + + _currentChannel.setAllowPublicPosts(false); + _currentChannel.setAllowPublicReplies(false); + _currentChannel.setEdition(createEdition(client)); + _currentChannel.setName("Default channel name"); + + ui.statusMessage("Channel creation process initiated"); + ui.statusMessage("Please specify fields as necessary with 'set', and complete the"); + ui.statusMessage("channel creation process with 'execute', or cancel the process with 'cancel'"); + ui.commandComplete(0, null); + } + + SigningPublicKey getNymPublicKey(DBClient client) { + List manageKeys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + List pubKeys = new ArrayList(); + // find all the 'identity' channels - those that we have + // the actual channel signing key for + for (int i = 0; i < manageKeys.size(); i++) { + NymKey key = (NymKey)manageKeys.get(i); + if (key.getAuthenticated()) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + Hash chan = pub.calculateHash(); + long chanId = client.getChannelId(chan); + if (chanId >= 0) + pubKeys.add(pub); + } + } + if (pubKeys.size() == 1) { + return (SigningPublicKey)pubKeys.get(0); + } else { + return null; + } + } + + /** today's date, but with a randomized hhmmss.SSS component */ + private long createEdition(DBClient client) { + long now = System.currentTimeMillis(); + now -= (now % 24*60*60*1000); + now += client.ctx().random().nextLong(24*60*60*1000); + return now; + } + + /** update --channel ($index|$hash): begin the process of updating an existing channel */ + private void processUpdate(DBClient client, UI ui, Opts opts) { + if (_currentChannel != null) { + ui.errorMessage("Cannot update an existing channel - an existing create process is already in progress"); + ui.errorMessage("Cancel or complete that process before continuing (with the cancel or execute commands)"); + ui.commandComplete(-1, null); + return; + } + + String chan = opts.getOptValue("channel"); + if (chan == null) { + ui.errorMessage("Please specify the channel to update with --channel $index or --channel $hash"); + ui.commandComplete(-1, null); + return; + } + + try { + int index = Integer.parseInt(chan); + if ( (index >= 0) && (index < _itemKeys.size()) ) { + long id = ((Long)_itemKeys.get(index)).longValue(); + _currentChannel = client.getChannel(id); + } 
else { + ui.errorMessage("Channel index out of range (channel count: " + _itemKeys.size() + ")"); + ui.commandComplete(-1, null); + return; + } + } catch (NumberFormatException nfe) { + byte h[] = Base64.decode(chan); + if ( (h != null) && (h.length == Hash.HASH_LENGTH) ) { + long id = client.getChannelId(new Hash(h)); + if (id >= 0) { + _currentChannel = client.getChannel(id); + } else { + ui.errorMessage("Channel " + chan + " is not known"); + ui.commandComplete(-1, null); + return; + } + } + } + + if (_currentChannel == null) { + ui.errorMessage("Invalid channel requested: " + chan); + ui.commandComplete(-1, null); + return; + } + + // now populate it with some default values + long ed = createEdition(client); + if (ed <= _currentChannel.getEdition()) + ed = _currentChannel.getEdition() + client.ctx().random().nextLong(1000); + _currentChannel.setEdition(ed); + _avatar = null; + _refs = null; + _encryptContent = null; + + ui.statusMessage("Channel update process initiated"); + ui.statusMessage("Please specify fields as necessary with 'set', and complete the"); + ui.statusMessage("channel update process with 'execute', or cancel the process with 'cancel'"); + ui.commandComplete(0, null); + } + + private void processSet(DBClient client, UI ui, Opts opts) { + if (_currentChannel == null) { + ui.errorMessage("Create/update process not yet initiated"); + ui.commandComplete(-1, null); + return; + } + ui.debugMessage("updating fields: " + opts.getOptNames()); + String name = opts.getOptValue("name"); + if (name != null) { + _currentChannel.setName(CommandImpl.strip(name)); + ui.statusMessage("Updated channel name"); + } + + String desc = opts.getOptValue("description"); + if (desc != null) { + _currentChannel.setDescription(CommandImpl.strip(desc)); + ui.statusMessage("Updated channel description"); + } + + String avatar = opts.getOptValue("avatar"); + if (avatar != null) { + File f = new File(avatar); + if (f.exists()) { + if (f.length() > Constants.MAX_AVATAR_SIZE) { + ui.errorMessage("Avatar file is too large (" + f.length() + ", max " + Constants.MAX_AVATAR_SIZE + ")"); + } else { + _avatar = avatar; + ui.statusMessage("Updated channel avatar"); + } + } else { + ui.errorMessage("Avatar file does not exist"); + _avatar = null; + } + } + + String edVal = opts.getOptValue("edition"); + if (edVal != null) { + long ed = opts.getOptLong("edition", _currentChannel.getEdition()+client.ctx().random().nextLong(1000)); + if (ed >= 0) { + _currentChannel.setEdition(ed); + ui.statusMessage("Updated channel edition"); + } else { + ed = createEdition(client); + if (ed <= _currentChannel.getEdition()) + ed = _currentChannel.getEdition() + client.ctx().random().nextLong(1000); + _currentChannel.setEdition(ed); + ui.statusMessage("Updated channel edition randomly"); + } + } + + String exp = opts.getOptValue("expiration"); + if (exp != null) { + Date when = null; + try { + synchronized (_dayFmt) { + when = _dayFmt.parse(exp); + } + } catch (ParseException pe) { + when = null; + } + if (when != null) + _currentChannel.setExpiration(when.getTime()); + else + _currentChannel.setExpiration(-1); + ui.statusMessage("Updated channel expiration"); + } + + String val = opts.getOptValue("publicPosting"); + if (val != null) { + boolean post = opts.getOptBoolean("publicPosting", _currentChannel.getAllowPublicPosts()); + _currentChannel.setAllowPublicPosts(post); + ui.statusMessage("Updated channel public posting policy"); + } + + val = opts.getOptValue("publicReplies"); + if (val != null) { + boolean reply = 
opts.getOptBoolean("publicReplies", _currentChannel.getAllowPublicReplies()); + _currentChannel.setAllowPublicReplies(reply); + ui.statusMessage("Updated channel public replies policy"); + } + + List tags = opts.getOptValues("pubTag"); + if (tags != null) { + _currentChannel.setPublicTags(new HashSet(tags)); + ui.statusMessage("Updated channel public tags"); + } + tags = opts.getOptValues("privTag"); + if (tags != null) { + _currentChannel.setPrivateTags(new HashSet(tags)); + ui.statusMessage("Updated channel private tags"); + } + + List manageKeys = opts.getOptValues("manageKey"); + if (manageKeys != null) { + Set mkeys = new HashSet(); + for (int i = 0; i < manageKeys.size(); i++) { + String mkey = (String)manageKeys.get(i); + byte mkeyData[] = Base64.decode(mkey); + if ( (mkeyData != null) && (mkeyData.length == SigningPublicKey.KEYSIZE_BYTES) ) + mkeys.add(new SigningPublicKey(mkeyData)); + } + _currentChannel.setAuthorizedManagers(mkeys); + ui.statusMessage("Updated channel manager keys"); + } + + List postKeys = opts.getOptValues("postKey"); + if (postKeys != null) { + Set pkeys = new HashSet(); + for (int i = 0; i < postKeys.size(); i++) { + String pkey = (String)postKeys.get(i); + byte pkeyData[] = Base64.decode(pkey); + if ( (pkeyData != null) && (pkeyData.length == SigningPublicKey.KEYSIZE_BYTES) ) + pkeys.add(new SigningPublicKey(pkeyData)); + } + _currentChannel.setAuthorizedPosters(pkeys); + ui.statusMessage("Updated channel post keys"); + } + + String refs = opts.getOptValue("refs"); + if (refs != null) { + File f = new File(refs); + if (f.exists()) { + _refs = refs; + ui.statusMessage("Updated channel references file"); + } else { + ui.errorMessage("References file does not exist"); + _refs = null; + } + } + + List archives = opts.getOptValues("pubArchive"); + if (archives != null) { + Set infos = new HashSet(); + for (int i = 0; i < archives.size(); i++) { + String str = (String)archives.get(i); + try { + SyndieURI uri = new SyndieURI(str); + ArchiveInfo info = new ArchiveInfo(); + info.setArchiveId(-1); + info.setPostAllowed(false); + info.setReadAllowed(true); + info.setURI(uri); + infos.add(info); + } catch (URISyntaxException use) { + ui.errorMessage("Archive URI is not valid [" + str + "]"); + } + } + _currentChannel.setPublicArchives(infos); + ui.statusMessage("Updated channel public archives"); + } + archives = opts.getOptValues("privArchive"); + if (archives != null) { + Set infos = new HashSet(); + for (int i = 0; i < archives.size(); i++) { + String str = (String)archives.get(i); + try { + SyndieURI uri = new SyndieURI(str); + ArchiveInfo info = new ArchiveInfo(); + info.setArchiveId(-1); + info.setPostAllowed(false); + info.setReadAllowed(true); + info.setURI(uri); + infos.add(info); + } catch (URISyntaxException use) { + ui.errorMessage("Archive URI is not valid [" + str + "]"); + } + } + _currentChannel.setPrivateArchives(infos); + ui.statusMessage("Updated channel private archives"); + } + + String enc = opts.getOptValue("encryptContent"); + if (enc != null) { + _encryptContent = new Boolean(opts.getOptBoolean("encryptContent", false)); + ui.statusMessage("Updated channel encryption policy"); + } + + String passphrase = opts.getOptValue("bodyPassphrase"); + if (passphrase != null) + _bodyPassphrase = passphrase; + String prompt = opts.getOptValue("bodyPassphrasePrompt"); + if (prompt != null) + _bodyPassphrasePrompt = prompt; + + ui.statusMessage("Channel settings updated"); + ui.commandComplete(0, null); + } + + private static final String SQL_LIST_NYMS = 
"SELECT identKey, name FROM channel ORDER BY name ASC"; + /** listnyms [--name $namePrefix] [--channel $hashPrefix] */ + private void processListNyms(DBClient client, UI ui, Opts opts) { + if (_currentChannel == null) { + ui.errorMessage("No creation or update process in progress"); + ui.commandComplete(-1, null); + return; + } + String namePrefix = opts.getOptValue("name"); + String chanPrefix = opts.getOptValue("channel"); + + _listedNymKeys.clear(); + Connection con = client.con(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = con.prepareStatement(SQL_LIST_NYMS); + rs = stmt.executeQuery(); + List banned = client.getBannedChannels(); + while (rs.next()) { + byte pubKey[] = rs.getBytes(1); + String name = rs.getString(2); + if (pubKey != null) { + SigningPublicKey pk = new SigningPublicKey(pubKey); + Hash chan = pk.calculateHash(); + if (banned.contains(chan)) + continue; + if (namePrefix != null) { + if (name == null) + continue; + if (!name.startsWith(namePrefix)) + continue; + } + if (chanPrefix != null) { + if (!chan.toBase64().startsWith(chanPrefix)) + continue; + } + _listedNymKeys.add(pk); + ui.statusMessage(_listedNymKeys.size() + ": " + + (name != null ? CommandImpl.strip(name) : "") + + " (" + chan.toBase64() + ")"); + } + } + } catch (SQLException se) { + ui.errorMessage("Internal error listing nyms", se); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + /** addnym (--nym $index | --key $base64(pubKey)) --action (manage|post) */ + private void processAddNym(DBClient client, UI ui, Opts opts) { + if (_currentChannel == null) { + ui.errorMessage("No creation or update process in progress"); + ui.commandComplete(-1, null); + return; + } + SigningPublicKey key = null; + int index = (int)opts.getOptLong("nym", -1); + if (index > 0) { + if (_listedNymKeys.size() < index) { + ui.errorMessage("Index is out of range (size=" + _listedNymKeys.size() + ")"); + ui.commandComplete(-1, null); + return; + } else { + key = (SigningPublicKey)_listedNymKeys.get(index-1); + } + } else { + byte data[] = opts.getOptBytes("key"); + if ( (data != null) && (data.length == SigningPublicKey.KEYSIZE_BYTES) ) { + key = new SigningPublicKey(data); + } + } + + boolean manage = false; + boolean post = false; + String action = opts.getOptValue("action"); + if (action != null) { + if ("manage".equalsIgnoreCase(action)) + manage = true; + else if ("post".equalsIgnoreCase(action)) + post = true; + } + + if ( (key == null) || (!manage && !post)) { + ui.errorMessage("Usage: addnym (--nym $index | --key $base64(pubKey)) --action (manage|post)"); + ui.commandComplete(-1, null); + return; + } + + if (manage) { + Set managers = _currentChannel.getAuthorizedManagers(); + if (managers == null) + managers = new HashSet(); + managers.add(key); + ui.statusMessage("Key " + key.calculateHash().toBase64() + " added to the managers list"); + } else { + Set posters = _currentChannel.getAuthorizedPosters(); + if (posters == null) + posters = new HashSet(); + posters.add(key); + ui.statusMessage("Key " + key.calculateHash().toBase64() + " added to the posters list"); + } + ui.commandComplete(0, null); + } + /** removenym (--nym $index | --key $base64(pubKey)) --action (manage|post) */ + private void processRemoveNym(DBClient client, UI ui, Opts opts) { + + if (_currentChannel == null) { + ui.errorMessage("No creation or update process in progress"); + ui.commandComplete(-1, null); + return; + } + 
SigningPublicKey key = null; + int index = (int)opts.getOptLong("nym", -1); + if (index > 0) { + if (_listedNymKeys.size() < index) { + ui.errorMessage("Index is out of range (size=" + _listedNymKeys.size() + ")"); + ui.commandComplete(-1, null); + return; + } else { + key = (SigningPublicKey)_listedNymKeys.get(index-1); + } + } else { + byte data[] = opts.getOptBytes("key"); + if ( (data != null) && (data.length == SigningPublicKey.KEYSIZE_BYTES) ) { + key = new SigningPublicKey(data); + } + } + + boolean manage = false; + boolean post = false; + String action = opts.getOptValue("action"); + if (action != null) { + if ("manage".equalsIgnoreCase(action)) + manage = true; + else if ("post".equalsIgnoreCase(action)) + post = true; + } + + if ( (key == null) || (!manage && !post)) { + ui.errorMessage("Usage: removenym (--nym $index | --key $base64(pubKey)) --action (manage|post)"); + ui.commandComplete(-1, null); + return; + } + + if (manage) { + Set managers = _currentChannel.getAuthorizedManagers(); + if (managers == null) + managers = new HashSet(); + managers.remove(key); + _currentChannel.setAuthorizedManagers(managers); + ui.statusMessage("Key " + key.calculateHash().toBase64() + " removed from the managers list"); + } else { + Set posters = _currentChannel.getAuthorizedPosters(); + if (posters == null) + posters = new HashSet(); + posters.remove(key); + _currentChannel.setAuthorizedPosters(posters); + ui.statusMessage("Key " + key.calculateHash().toBase64() + " remove from the posters list"); + } + ui.commandComplete(0, null); + } + + private void processPreview(DBClient client, UI ui, Opts opts) { + if (_currentChannel == null) { + ui.errorMessage("No creation or update process in progress"); + ui.commandComplete(-1, null); + return; + } + ui.statusMessage(_currentChannel.toString()); + if (_avatar != null) + ui.statusMessage("Loading the channel avatar from: " + _avatar); + else + ui.statusMessage("Using the existing channel avatar"); + if (_encryptContent != null) + ui.statusMessage("Encrypt all content for authorized users only? " + _encryptContent.booleanValue()); + if (_refs != null) + ui.statusMessage("Loading the channel references from: " + _refs); + else + ui.statusMessage("No channel references source file defined"); + } + + private void processMeta(DBClient client, UI ui, Opts opts) { + long channelIndex = -1; + Hash channel = null; + String chan = opts.getOptValue("channel"); + if (chan != null) { + try { + long val = Long.parseLong(chan); + channelIndex = val; + } catch (NumberFormatException nfe) { + ui.debugMessage("channel requested is not an index (" + chan + ")"); + // ok, not an integer, maybe its a full channel hash? 
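+                // a channel hash must decode to exactly Hash.HASH_LENGTH bytes; anything else is rejected below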
+ byte val[] = Base64.decode(chan); + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) { + channel = new Hash(val); + ui.debugMessage("channel requested is a hash (" + channel.toBase64() + ")"); + } else { + ui.errorMessage("Channel requested is not valid - either specify --channel $index or --channel $base64(channelHash)"); + ui.commandComplete(-1, null); + return; + } + } + } + + ChannelInfo info = _currentChannel; + + long channelId = -1; + if ( (channelIndex >= 0) && (channelIndex < _itemKeys.size()) ) { + channelId = ((Long)_itemKeys.get((int)channelIndex)).longValue(); + info = client.getChannel(channelId); + } else if (channel != null) { + channelId = client.getChannelId(channel); + info = client.getChannel(channelId); + } + + if (info == null) { + ui.debugMessage("channelIndex=" + channelIndex + " itemKeySize: " + _itemKeys.size()); + ui.debugMessage("channel=" + channelIndex); + ui.errorMessage("Invalid or unknown channel requested"); + ui.commandComplete(-1, null); + return; + } + + ui.statusMessage(info.toString()); + } + + /** + * execute [--out $outputDir]: create/update the channel, generating the metadata and + * private keys in the given dir, and importing them into the current nym. also + * clears the current create or update state + */ + private void processExecute(DBClient client, UI ui, Opts opts) { + if (_currentChannel == null) { + ui.errorMessage("No create or update process in progress"); + ui.commandComplete(-1, null); + return; + } + + String out = opts.getOptValue("out"); + //if (out == null) { + // ui.errorMessage("You must specify a file to write the signed metadata to (with --out $filename)"); + // ui.commandComplete(-1, null); + // return; + //} + File tmpDir = client.getTempDir(); + tmpDir.mkdirs(); + File manageOut = null; + File replyOut = null; + File encPostOut = null; + File encMetaOut = null; + try { + manageOut = File.createTempFile("syndieManage", "dat", tmpDir); + replyOut = File.createTempFile("syndieReply", "dat", tmpDir); + encPostOut = File.createTempFile("syndieEncPost", "dat", tmpDir); + encMetaOut = File.createTempFile("syndieEncMeta", "dat", tmpDir); + if (out == null) { + out = File.createTempFile("syndieMetaOut", Constants.FILENAME_SUFFIX, tmpDir).getPath(); + } + } catch (IOException ioe) { + ui.errorMessage("Unable to create temporary files", ioe); + ui.commandComplete(-1, null); + return; + } + + Opts chanGenOpts = new Opts(); + chanGenOpts.setCommand("changen"); + chanGenOpts.setOptValue("name", _currentChannel.getName()); + chanGenOpts.setOptValue("description", _currentChannel.getDescription()); + chanGenOpts.setOptValue("avatar", _avatar); + chanGenOpts.setOptValue("edition", Long.toString(_currentChannel.getEdition())); + chanGenOpts.setOptValue("publicPosting", (_currentChannel.getAllowPublicPosts() ? Boolean.TRUE.toString() : Boolean.FALSE.toString())); + chanGenOpts.setOptValue("publicReplies", (_currentChannel.getAllowPublicReplies() ? 
Boolean.TRUE.toString() : Boolean.FALSE.toString())); + Set tags = _currentChannel.getPublicTags(); + if (tags != null) { + for (Iterator iter = tags.iterator(); iter.hasNext(); ) + chanGenOpts.addOptValue("pubTag", iter.next().toString()); + } + tags = _currentChannel.getPrivateTags(); + if (tags != null) { + for (Iterator iter = tags.iterator(); iter.hasNext(); ) + chanGenOpts.addOptValue("privTag", iter.next().toString()); + } + + SigningPublicKey us = getNymPublicKey(client); + + Set keys = _currentChannel.getAuthorizedPosters(); + if (keys != null) { + for (Iterator iter = keys.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + chanGenOpts.addOptValue("postKey", pub.toBase64()); + } + } + + keys = _currentChannel.getAuthorizedManagers(); + if (keys != null) { + for (Iterator iter = keys.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + chanGenOpts.addOptValue("manageKey", pub.toBase64()); + } + } + + chanGenOpts.setOptValue("refs", _refs); + + Set archives = _currentChannel.getPublicArchives(); + if (archives != null) { + for (Iterator iter = archives.iterator(); iter.hasNext(); ) { + ArchiveInfo archive = (ArchiveInfo)iter.next(); + chanGenOpts.addOptValue("pubArchive", archive.getURI().toString()); + } + } + archives = _currentChannel.getPrivateArchives(); + if (archives != null) { + for (Iterator iter = archives.iterator(); iter.hasNext(); ) { + ArchiveInfo archive = (ArchiveInfo)iter.next(); + chanGenOpts.addOptValue("privArchive", archive.getURI().toString()); + } + } + + if (_encryptContent != null) + chanGenOpts.setOptValue("encryptContent", _encryptContent.booleanValue() ? Boolean.TRUE.toString() : Boolean.FALSE.toString()); + + if (_currentChannel.getChannelId() >= 0) + chanGenOpts.setOptValue("channelId", Long.toString(_currentChannel.getChannelId())); + + chanGenOpts.setOptValue("metaOut", out); + chanGenOpts.setOptValue("keyManageOut", manageOut.getPath()); + chanGenOpts.setOptValue("keyReplyOut", replyOut.getPath()); + chanGenOpts.setOptValue("keyEncryptPostOut", encPostOut.getPath()); + chanGenOpts.setOptValue("keyEncryptMetaOut", encMetaOut.getPath()); + + if ( (_bodyPassphrase != null) && (_bodyPassphrasePrompt != null) ) { + chanGenOpts.setOptValue("bodyPassphrase", CommandImpl.strip(_bodyPassphrase)); + chanGenOpts.setOptValue("bodyPassphrasePrompt", CommandImpl.strip(_bodyPassphrasePrompt)); + } + + ChanGen cmd = new ChanGen(); + ui.debugMessage("Generating with options " + chanGenOpts); + NestedUI nestedUI = new NestedUI(ui); + cmd.runCommand(chanGenOpts, nestedUI, client); + + if ( (nestedUI.getExitCode() >= 0) && (opts.getOptValue("metaOut") == null) ) { + // ok, used the default dir - migrate it + FileInputStream fis = null; + FileOutputStream fos = null; + try { + fis = new FileInputStream(out); + Enclosure enc = new Enclosure(fis); + SigningPublicKey pub = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + if (pub == null) { + ui.errorMessage("Unable to pull the channel from the enclosure"); + ui.commandComplete(-1, null); + return; + } else { + ui.debugMessage("Channel identity: " +pub.calculateHash().toBase64()); + } + File chanDir = new File(client.getOutboundDir(), pub.calculateHash().toBase64()); + chanDir.mkdirs(); + File mdFile = new File(chanDir, "meta" + Constants.FILENAME_SUFFIX); + fos = new FileOutputStream(mdFile); + fis = new FileInputStream(out); + byte buf[] = new byte[4096]; + int read = -1; + while ( (read = fis.read(buf)) != -1) + fos.write(buf, 0, 
read); + fis.close(); + fos.close(); + fis = null; + fos = null; + File outFile = new File(out); + outFile.delete(); + out = mdFile.getPath(); + ui.statusMessage("Sharable channel metadata saved to " + mdFile.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error migrating the channel metadata from " + out, ioe); + } finally { + if (fis != null) try { fis.close(); } catch (IOException ioe) {} + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + File outFile = new File(out); + if ( (nestedUI.getExitCode() >= 0) && (outFile.exists() && outFile.length() > 0) ) { + // channel created successfully, now import the metadata and keys, and delete + // the temporary files + ui.statusMessage("Channel metadata created and stored in " + outFile.getPath()); + + Importer msgImp = new Importer(); + Opts msgImpOpts = new Opts(); + msgImpOpts.setOptValue("in", out); + if (_bodyPassphrase != null) + msgImpOpts.setOptValue("passphrase", CommandImpl.strip(_bodyPassphrase)); + msgImpOpts.setCommand("import"); + NestedUI dataNestedUI = new NestedUI(ui); + ui.debugMessage("Importing with options " + msgImpOpts); + msgImp.runCommand(msgImpOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + return; + } + ui.statusMessage("Channel metadata imported"); + + KeyImport keyImp = new KeyImport(); + Opts keyOpts = new Opts(); + if (manageOut.length() > 0) { + keyOpts.setOptValue("keyfile", manageOut.getPath()); + keyOpts.setOptValue("authentic", "true"); + dataNestedUI = new NestedUI(ui); + keyImp.runCommand(keyOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.errorMessage("Failed in the nested key import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + return; + } + ui.statusMessage("Channel management key imported"); + } + if (replyOut.length() > 0) { + keyOpts = new Opts(); + keyOpts.setOptValue("keyfile", replyOut.getPath()); + keyOpts.setOptValue("authentic", "true"); + dataNestedUI = new NestedUI(ui); + keyImp.runCommand(keyOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.errorMessage("Failed in the nested key import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + return; + } + ui.statusMessage("Channel reply key imported"); + } + if (encPostOut.length() > 0) { + keyOpts = new Opts(); + keyOpts.setOptValue("keyfile", encPostOut.getPath()); + keyOpts.setOptValue("authentic", "true"); + dataNestedUI = new NestedUI(ui); + keyImp.runCommand(keyOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.errorMessage("Failed in the nested key import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + return; + } + ui.statusMessage("Channel post read key imported"); + } + if (encMetaOut.length() > 0) { + keyOpts = new Opts(); + keyOpts.setOptValue("keyfile", encMetaOut.getPath()); + keyOpts.setOptValue("authentic", "true"); + dataNestedUI = new NestedUI(ui); + keyImp.runCommand(keyOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.errorMessage("Failed in the nested key import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + return; + } + ui.statusMessage("Channel metadata read key imported"); + } + + manageOut.delete(); + replyOut.delete(); + encPostOut.delete(); + encMetaOut.delete(); + + _currentChannel = null; + _avatar = null; + _refs = null; + _encryptContent = null; + } + 
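+        // report the nested changen exit code; the in-progress state (_currentChannel, _avatar, _refs, _encryptContent) is only cleared on success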
ui.commandComplete(nestedUI.getExitCode(), null); + } +} diff --git a/src/syndie/db/MessageExtract.java b/src/syndie/db/MessageExtract.java new file mode 100644 index 0000000..743c2c7 --- /dev/null +++ b/src/syndie/db/MessageExtract.java @@ -0,0 +1,369 @@ +package syndie.db; + +import java.io.*; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.Enclosure; +import syndie.data.EnclosureBody; +import syndie.data.SyndieURI; + +/** + *CLI messageextract + * --db $dbURL + * --login $login + * --pass $pass + * --in $filename // data is read from the given snd file + * --out $outDirectory // data is extracted to the given dir + * [--passphrase $passphrase] // use the passphrase for the PBE key derivation + */ +public class MessageExtract extends CommandImpl { + MessageExtract() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "in", "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "in", "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + nymId = client.connect(args.getOptValue("db"), args.getOptValue("login"), args.getOptValue("pass")); + } else { + nymId = client.getLoggedInNymId(); + } + if (nymId < 0) { + ui.errorMessage("Invalid login"); + ui.commandComplete(-1, null); + } else { + extract(client, ui, args, nymId); + } + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + } catch (IOException ioe) { + ui.errorMessage("Error reading the message", ioe); + ui.commandComplete(-1, null); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + private void extract(DBClient client, UI ui, Opts args, long nymId) throws SQLException, IOException { + FileInputStream in = new FileInputStream(args.getOptValue("in")); + Enclosure enc = new Enclosure(in); + try { + String format = enc.getEnclosureType(); + if (format == null) { + throw new IOException("No enclosure type"); + } else if (!format.startsWith(Constants.TYPE_PREFIX)) { + throw new IOException("Unsupported enclosure format: " + format); + } + + String type = enc.getHeaderString(Constants.MSG_HEADER_TYPE); + if (Constants.MSG_TYPE_POST.equals(type)) // validate and import content message + extractPost(client, ui, enc, nymId, args); + else if (Constants.MSG_TYPE_REPLY.equals(type)) // validate and import reply message + extractReply(client, ui, enc, nymId, args); + else + throw new IOException("Invalid message type: " + type); + } finally { + enc.discardData(); + } + } + + protected void extractPost(DBClient client, UI ui, Enclosure enc, long nymId, Opts args) throws IOException { + if (verifyPost(client, enc)) { + //ImportPost.process(_client, enc, nymId); + + EnclosureBody body = null; + SigningPublicKey ident = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + SessionKey key = enc.getHeaderSessionKey(Constants.MSG_HEADER_BODYKEY); + if (key != 
null) { + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + ui.errorMessage("Error decrypting with the published key", dfe); + ui.commandComplete(-1, null); + return; + } catch (IOException ioe) { + ui.debugMessage("Error decrypting with the published key", ioe); + return; + } + } else { + String prompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + byte promptSalt[] = enc.getHeaderBytes(Constants.MSG_HEADER_PBE_PROMPT_SALT); + if ( (prompt != null) && (promptSalt != null) && (promptSalt.length != 0) ) { + String passphrase = args.getOptValue("passphrase"); + if (passphrase == null) { + ui.errorMessage("Passphrase required to extract this message"); + ui.errorMessage("Please use --passphrase 'passphrase value', where the passphrase value is the answer to:"); + ui.errorMessage(strip(prompt)); + ui.commandComplete(-1, null); + return; + } else { + key = client.ctx().keyGenerator().generateSessionKey(promptSalt, DataHelper.getUTF8(passphrase)); + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + ui.errorMessage("Invalid passphrase", dfe); + ui.commandComplete(-1, null); + return; + } catch (IOException ioe) { + ui.debugMessage("Invalid passphrase", ioe); + return; + } + } + } else { + Hash identHash = ident.calculateHash(); + List keys = client.getReadKeys(identHash, nymId, client.getPass()); + byte target[] = enc.getHeaderBytes(Constants.MSG_HEADER_TARGET_CHANNEL); + if ( (target != null) && (target.length == Hash.HASH_LENGTH) ) { + List targetKeys = client.getReadKeys(new Hash(target), client.getLoggedInNymId(), client.getPass()); + keys.addAll(targetKeys); + } + + for (int i = 0; keys != null && i < keys.size(); i++) { + // try decrypting with that key + try { + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), (SessionKey)keys.get(i)); + break; + } catch (IOException ioe) { + ui.errorMessage("Error decrypting with the read key", ioe); + ui.commandComplete(-1, null); + return; + } catch (DataFormatException dfe) { + ui.debugMessage("Error decrypting with a read key", dfe); + continue; + } + } + if (body == null) { + ui.errorMessage("No read keys successful at decrypting the message"); + ui.commandComplete(-1, null); + return; + } + } + } + + ui.debugMessage("enclosure: " + enc + "\nbody: " + body); + extract(enc, ui, body, args); + } + } + + /** + * The post message is ok if it is either signed by the channel's + * identity itself, one of the manager keys, one of the authorized keys, + * or the post's authentication key + */ + private boolean verifyPost(DBClient client, Enclosure enc) { + if (true) return true; + SigningPublicKey pubKey = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + Signature sig = enc.getAuthorizationSig(); + boolean ok = verifySig(client, sig, enc.getAuthorizationHash(), pubKey); + if (!ok) { + SigningPublicKey pubKeys[] = enc.getHeaderSigningKeys(Constants.MSG_META_HEADER_MANAGER_KEYS); + if (pubKeys != null) { + for (int i = 0; i < pubKeys.length; i++) { + if (verifySig(client, sig, enc.getAuthorizationHash(), pubKeys[i])) { + ok = true; + break; + } + } + } + } + return ok; + } + + private void extract(Enclosure enc, UI ui, EnclosureBody body, Opts args) throws IOException { + File dir = new File(args.getOptValue("out")); + if (dir.exists()) + throw new IOException("Output directory already exists: " + 
dir); + dir.mkdirs(); + for (int i = 0; i < body.getPages(); i++) { + File page = new File(dir, "page" + i + ".dat"); + FileOutputStream fos = new FileOutputStream(page); + fos.write(body.getPage(i)); + fos.close(); + + File cfg = new File(dir, "page" + i + ".cfg"); + fos = new FileOutputStream(cfg); + write(body.getPageConfig(i), fos); + fos.close(); + fos.close(); + } + for (int i = 0; i < body.getAttachments(); i++) { + File attach = new File(dir, "attach" + i + ".dat"); + FileOutputStream fos = new FileOutputStream(attach); + fos.write(body.getAttachment(i)); + fos.close(); + + File cfg = new File(dir, "attach" + i + ".cfg"); + fos = new FileOutputStream(cfg); + write(body.getAttachmentConfig(i), fos); + fos.close(); + } + File avatar = new File(dir, "avatar.png"); + InputStream in = body.getAvatar(); + if (in != null) { + FileOutputStream fos = new FileOutputStream(avatar); + byte buf[] = new byte[1024]; + int read = -1; + while ( (read = in.read(buf)) != -1) + fos.write(buf, 0, read); + fos.close(); + } + + FileOutputStream out = new FileOutputStream(new File(dir, "privHeaders.txt")); + write(body.getHeaders(), out); + out.close(); + out = new FileOutputStream(new File(dir, "pubHeaders.txt")); + write(enc.getHeaders(), out); + out.close(); + + ui.commandComplete(0, null); + } + + protected void extractReply(DBClient client, UI ui, Enclosure enc, long nymId, Opts args) throws IOException { + if (verifyReply(client, enc)) { + //ImportPost.process(_client, enc, nymId); + + EnclosureBody body = null; + SigningPublicKey ident = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + SessionKey key = enc.getHeaderSessionKey(Constants.MSG_HEADER_BODYKEY); + if (key != null) { + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + // ignore + ui.debugMessage("DFE decrypting with the published key", dfe); + } catch (IOException ioe) { + // ignore + ui.debugMessage("IOE decrypting with the published key", ioe); + } catch (ArrayIndexOutOfBoundsException e) { + // ignore + ui.debugMessage("Err decrypting with the published key", e); + } + } + + String prompt = enc.getHeaderString(Constants.MSG_HEADER_PBE_PROMPT); + byte promptSalt[] = enc.getHeaderBytes(Constants.MSG_HEADER_PBE_PROMPT_SALT); + if ( (prompt != null) && (promptSalt != null) && (promptSalt.length != 0) ) { + String passphrase = args.getOptValue("passphrase"); + if (passphrase == null) { + ui.errorMessage("Passphrase required to extract this message"); + ui.errorMessage("Please use --passphrase 'passphrase value', where the passphrase value is the answer to:"); + ui.errorMessage(strip(prompt)); + ui.commandComplete(-1, null); + return; + } else { + key = client.ctx().keyGenerator().generateSessionKey(promptSalt, DataHelper.getUTF8(passphrase)); + try { + // decrypt it with that key + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), key); + } catch (DataFormatException dfe) { + ui.errorMessage("Invalid passphrase", dfe); + ui.commandComplete(-1, null); + return; + } catch (IOException ioe) { + ui.debugMessage("Invalid passphrase", ioe); + return; + } + } + } + + if (body == null) { + SyndieURI uri = enc.getHeaderURI(Constants.MSG_HEADER_POST_URI); + if (uri == null) { + ui.errorMessage("Cannot decrypt a reply if we don't know what channel it is on"); + ui.commandComplete(-1, null); + return; + } + Hash channel = uri.getScope(); //Channel(); + if (channel == null) { + ui.errorMessage("Cannot decrypt a 
reply if the URI doesn't have a channel in it - " + uri); + ui.commandComplete(-1, null); + return; + } + List keys = client.getReplyKeys(channel, nymId, client.getPass()); + for (int i = 0; keys != null && i < keys.size(); i++) { + // try decrypting with that key + try { + body = new EnclosureBody(client.ctx(), enc.getData(), enc.getDataSize(), (PrivateKey)keys.get(i)); + break; + } catch (IOException ioe) { + ui.errorMessage("Error decrypting with the reply key", ioe); + ui.commandComplete(-1, null); + return; + } catch (DataFormatException dfe) { + ui.debugMessage("Error decrypting with the reply key", dfe); + continue; + } + } + } + if (body == null) { + ui.errorMessage("No reply key was able to open the message"); + ui.commandComplete(-1, null); + return; + } + + ui.debugMessage("enclosure: " + enc + "\nbody: " + body); + extract(enc, ui, body, args); + } + } + + /** + * The post message is ok if it is either signed by the channel's + * identity itself, one of the manager keys, one of the authorized keys, + * or the post's authentication key + */ + private boolean verifyReply(DBClient client, Enclosure enc) { + if (true) return true; + SigningPublicKey pubKey = enc.getHeaderSigningKey(Constants.MSG_META_HEADER_IDENTITY); + Signature sig = enc.getAuthorizationSig(); + boolean ok = verifySig(client, sig, enc.getAuthorizationHash(), pubKey); + if (!ok) { + SigningPublicKey pubKeys[] = enc.getHeaderSigningKeys(Constants.MSG_META_HEADER_MANAGER_KEYS); + if (pubKeys != null) { + for (int i = 0; i < pubKeys.length; i++) { + if (verifySig(client, sig, enc.getAuthorizationHash(), pubKeys[i])) { + ok = true; + break; + } + } + } + } + return ok; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "messageextract", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j", + "--in", "/tmp/messageOut", + "--out", "/tmp/messageExtract" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/MessageGen.java b/src/syndie/db/MessageGen.java new file mode 100644 index 0000000..42f0877 --- /dev/null +++ b/src/syndie/db/MessageGen.java @@ -0,0 +1,730 @@ +package syndie.db; + +import gnu.crypto.hash.Sha256Standalone; +import java.io.*; +import java.net.URISyntaxException; +import java.sql.SQLException; +import java.util.*; +import java.util.zip.ZipEntry; +import java.util.zip.ZipOutputStream; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.Constants; +import syndie.data.EnclosureBody; +import syndie.data.NymKey; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + *CLI messagegen + * --db $dbURL + * --login $login // keys/etc are pulled from the db, but the + * --pass $pass // post itself is not imported via the CLI post + //* [--channel $base64(channelHash)]// required, unless --simple + * [--targetChannel $base64(channelHash)] + * [--scopeChannel $base64(channelHash)] + * (--page$n $filename --page$n-config $filename)* + * (--attach$n $filename --attach$n-config $filename)* + * [--authenticationKey $base64(privKey)] // what key to use to authenticate our post? 
+ * [--authorizationKey $base64(privKey)] + * [--messageId $id] // if unspecified, randomize(trunc(now())) + * [--subject $subject] // short description of the post + * [--postAsUnauthorized $boolean] // if true, encrypt with a random key and publicize it in the BodyKey public header + * [--avatar $filename] // overrides the avatar listed in the postAs channel metadata + * [--encryptContent $boolean] // if true, encrypt the content with a known read key for the channel + * [--bodyPassphrase $passphrase --bodyPassphrasePrompt $prompt] + * // derive the body key from the passphrase, and include a publicly + * // visible hint to prompt it + * [--postAsReply $boolean] // if true, the post should be encrypted to the channel's reply key + * [--pubTag $tag]* // publicly visible tags + * [--privTag $tag]* // tags in the encrypted body + * [--refs $channelRefGroupFile] // ([\t]*$name\t$uri\t$refType\t$description\n)* lines + * (--cancel $uri)* // posts to be marked as cancelled (only honored if authorized to do so for those posts) + * [--overwrite $uri] // replace the $uri with the current post, if authorized to do so + * [--references $uri[,$uri]*] // ordered list of previous posts in the thread, newest first + * [--expiration $yyyymmdd] // date after which the post should be dropped + * [--forceNewThread $boolean] // if true, this post begins a new thread, even if there are references + * [--refuseReplies $boolean] // if true, only the author can reply to this post + * [--simple $boolean] // if true, default the $channel and $authenticationKey to the nym's blog, + * // the $authorizationKey to the nym's blog (or if the nym has a post or manage key for the target channel, + * // one of those keys), default --encryptContent to true if a readKey is known + * --out $filename + */ +public class MessageGen extends CommandImpl { + MessageGen() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + Hash targetChannel = null; + Hash scopeChannel = null; + + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + nymId = client.connect(args.getOptValue("db"), args.getOptValue("login"), args.getOptValue("pass")); + } else { + nymId = client.getLoggedInNymId(); + } + if (nymId < 0) { + ui.errorMessage("Invalid login"); + ui.commandComplete(-1, null); + return client; + } + + if (args.getOptBoolean("simple", true)) { + boolean ok = updateSimpleArgs(client, ui, nymId, args); + if (!ok) { + ui.commandComplete(-1, null); + return client; + } + } + + byte val[] = args.getOptBytes("scopeChannel"); + if ( (val == null) || (val.length != Hash.HASH_LENGTH) ) { + ui.errorMessage("Invalid scope channel"); + ui.commandComplete(-1, null); + return client; + } else { + scopeChannel = new Hash(val); + } + + long chanId = client.getChannelId(scopeChannel); + if (chanId < 0) { + ui.errorMessage("Cannot post to " + scopeChannel.toBase64() + ", as it isn't known locally"); + 
ui.commandComplete(-1, null); + return client; + } + + val = args.getOptBytes("targetChannel"); + if ( (val == null) || (val.length != Hash.HASH_LENGTH) ) { + // ok, targetting the scope channel + targetChannel = scopeChannel; + } else { + targetChannel = new Hash(val); + } + + long targetChanId = client.getChannelId(targetChannel); + if (targetChanId < 0) { + ui.errorMessage("Cannot target " + targetChannel.toBase64() + ", as it isn't known locally"); + ui.commandComplete(-1, null); + return client; + } + + boolean ok = false; + if (args.getOptBoolean("postAsReply", false)) + ok = genMessage(client, ui, nymId, chanId, targetChanId, scopeChannel, targetChannel, args, client.getReplyKey(targetChanId)); + else + ok = genMessage(client, ui, nymId, chanId, targetChanId, scopeChannel, targetChannel, args, null); + + if (ok) + ui.commandComplete(0, null); + else + ui.commandComplete(-1, null); + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + private boolean genMessage(DBClient client, UI ui, long nymId, long scopeChannelId, long targetChannelId, Hash scopeChannel, Hash targetChannel, Opts args, PublicKey to) throws SQLException { + List readKeys = client.getReadKeys(targetChannel, nymId, client.getPass()); + SessionKey bodyKey = null; + boolean postAsUnauthorized = args.getOptBoolean("postAsUnauthorized", false); + + if ( (readKeys == null) || (readKeys.size() <= 0) ) { + if (!postAsUnauthorized) { + ui.errorMessage("We are not authorized to post (or don't have any keys to post with) and "); + ui.errorMessage("we haven't been asked to --postAsUnauthorized. aborting."); + return false; + } + } + + SigningPrivateKey authorizationPrivate = null; + SigningPrivateKey authenticationPrivate = null; + + List targetSignKeys = client.getSignKeys(targetChannel, nymId, client.getPass()); + Map signKeyHashes = new HashMap(); + for (Iterator iter = targetSignKeys.iterator(); iter.hasNext(); ) { + SigningPrivateKey key = (SigningPrivateKey)iter.next(); + signKeyHashes.put(key.calculateHash(), key); + } + List scopeSignKeys = client.getSignKeys(scopeChannel, nymId, client.getPass()); + for (Iterator iter = scopeSignKeys.iterator(); iter.hasNext(); ) { + SigningPrivateKey key = (SigningPrivateKey)iter.next(); + signKeyHashes.put(key.calculateHash(), key); + } + + byte key[] = args.getOptBytes("authorizationKey"); + if ( (key != null) && (key.length == Hash.HASH_LENGTH) ) { + authorizationPrivate = (SigningPrivateKey)signKeyHashes.get(new Hash(key)); + if (authorizationPrivate == null) { + ui.errorMessage("Authorization key w/ H()=" + Base64.encode(key) + " was not known for scope channel " + scopeChannel.toBase64() + " / " + targetChannel.toBase64() + " / " + nymId); + ui.errorMessage("Known hashes: " + signKeyHashes.keySet()); + return false; + } + } + + boolean unauthorized = false; + byte authenticationMask[] = null; + key = args.getOptBytes("authenticationKey"); + if ( (key != null) && (key.length == Hash.HASH_LENGTH) ) { + authenticationPrivate = (SigningPrivateKey)signKeyHashes.get(new Hash(key)); + if (authenticationPrivate == null) { + List authOnly = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, null); + for (int i = 0; i < authOnly.size(); i++) { + NymKey nymKey = (NymKey)authOnly.get(i); + if (Constants.KEY_FUNCTION_POST.equals(nymKey.getFunction()) || + Constants.KEY_FUNCTION_MANAGE.equals(nymKey.getFunction())) { + SigningPrivateKey 
authPriv = new SigningPrivateKey(nymKey.getData()); + if (authPriv.calculateHash().equals(new Hash(key))) { + ui.debugMessage("Authenticating as a third party: " + client.ctx().keyGenerator().getSigningPublicKey(authPriv).calculateHash().toBase64().substring(0,6)); + authenticationPrivate = authPriv; + unauthorized = true; + break; + } + } + } + if (authenticationPrivate == null) { + ui.errorMessage("Authentication key w/ H()=" + Base64.encode(key) + " was not known"); + ui.errorMessage("Known hashes: " + signKeyHashes.keySet()); + return false; + } + } + if (!unauthorized) { + authenticationMask = new byte[Signature.SIGNATURE_BYTES]; + client.ctx().random().nextBytes(authenticationMask); + } + } + + Hash uriChannel = scopeChannel; + boolean bodyKeyIsPublic = false; + + if (postAsUnauthorized || args.getOptBoolean("postAsReply", false)) { + ui.debugMessage("creating a new body key (postAsUnaut? " + postAsUnauthorized + ", postAsReply? " + args.getOptBoolean("postAsReply", false) + ")"); + bodyKey = client.ctx().keyGenerator().generateSessionKey(); + if (!args.getOptBoolean("postAsReply", false)) + bodyKeyIsPublic = true; + if (authenticationPrivate != null) { + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(authenticationPrivate); + uriChannel = pub.calculateHash(); + } else if (authorizationPrivate != null) { + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(authorizationPrivate); + uriChannel = pub.calculateHash(); + } + } else { + int index = client.ctx().random().nextInt(readKeys.size()); + bodyKey = (SessionKey)readKeys.get(index); + bodyKeyIsPublic = false; + ui.debugMessage("using a known read key"); + } + + byte salt[] = null; + if ( (args.getOptValue("bodyPassphrase") != null) && (args.getOptValue("bodyPassphrasePrompt") != null) ) { + salt = new byte[32]; + client.ctx().random().nextBytes(salt); + SessionKey pbeKey = client.ctx().keyGenerator().generateSessionKey(salt, DataHelper.getUTF8(args.getOptValue("bodyPassphrase"))); + ui.debugMessage("Encrypting with PBE key " + Base64.encode(pbeKey.getData()) + " derived from " + args.getOptValue("bodyPassphrase") + " and salted with " + Base64.encode(salt)); + bodyKey = pbeKey; + } + + Map publicHeaders = generatePublicHeaders(client, ui, args, uriChannel, targetChannel, bodyKey, bodyKeyIsPublic, salt, postAsUnauthorized); + Map privateHeaders = generatePrivateHeaders(client, ui, args, targetChannel, authenticationPrivate, authenticationMask); + + String refStr = null; + String filename = args.getOptValue("refs"); + if (filename != null) { + refStr = readRefs(ui, filename); + ui.debugMessage("Reading refs from " + filename + ", came up with " + (refStr != null ? 
refStr.length() + " chars": "no file"));; + } + + String out = args.getOptValue("out"); + byte avatar[] = read(ui, args.getOptValue("avatar"), Constants.MAX_AVATAR_SIZE); + try { + byte zipped[] = prepareBody(args, ui, privateHeaders, refStr, avatar); + boolean written = writeMessage(client, ui, out, authorizationPrivate, authenticationPrivate, authenticationMask, to, bodyKey, publicHeaders, avatar, zipped); + if (!written) + return false; + else + return true; + } catch (IOException ioe) { + ui.errorMessage("Error writing the message", ioe); + return false; + } + } + + private boolean writeMessage(DBClient client, UI ui, String out, SigningPrivateKey authorizationPrivate, SigningPrivateKey authenticationPrivate, byte[] authenticationMask, PublicKey to, SessionKey bodyKey, Map pubHeaders, byte[] avatar, byte[] zipped) throws IOException { + FileOutputStream fos = null; + try { + byte encBody[] = null; + if (to == null) { + ui.debugMessage("Encrypting the message with the body key " + bodyKey.toBase64()); + encBody = encryptBody(client.ctx(), zipped, bodyKey); + } else { + ui.debugMessage("Encrypting the message to the reply key " + to.calculateHash().toBase64()); + encBody = encryptBody(client.ctx(), zipped, to); + } + fos = new FileOutputStream(out); + Sha256Standalone hash = new Sha256Standalone(); + DataHelper.write(fos, DataHelper.getUTF8(Constants.TYPE_CURRENT+"\n"), hash); + TreeSet ordered = new TreeSet(pubHeaders.keySet()); + for (Iterator iter = ordered.iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + String val = (String)pubHeaders.get(key); + DataHelper.write(fos, DataHelper.getUTF8(key + '=' + val + '\n'), hash); + } + DataHelper.write(fos, DataHelper.getUTF8("\nSize=" + encBody.length + "\n"), hash); + DataHelper.write(fos, encBody, hash); + + byte authorizationHash[] = ((Sha256Standalone)hash.clone()).digest(); // digest() reset()s + byte sig[] = null; + if (authorizationPrivate != null) { + sig = client.ctx().dsa().sign(new Hash(authorizationHash), authorizationPrivate).getData(); + } else { + sig = new byte[Signature.SIGNATURE_BYTES]; + client.ctx().random().nextBytes(sig); + } + ui.debugMessage("Authorization hash: " + Base64.encode(authorizationHash) + " sig: " + Base64.encode(sig)); + DataHelper.write(fos, DataHelper.getUTF8("AuthorizationSig=" + Base64.encode(sig) + "\n"), hash); + + byte authenticationHash[] = hash.digest(); + sig = null; + if (authenticationPrivate != null) { + sig = client.ctx().dsa().sign(new Hash(authenticationHash), authenticationPrivate).getData(); + if ( (authenticationMask != null) && (authorizationPrivate != null) ) + DataHelper.xor(sig, 0, authenticationMask, 0, sig, 0, sig.length); + } else { + sig = new byte[Signature.SIGNATURE_BYTES]; + client.ctx().random().nextBytes(sig); + } + ui.debugMessage("Authentication hash: " + Base64.encode(authenticationHash) + " sig: " + Base64.encode(sig)); + DataHelper.write(fos, DataHelper.getUTF8("AuthenticationSig=" + Base64.encode(sig) + "\n"), hash); + + fos.close(); + fos = null; + return true; + } catch (IOException ioe) { + ui.errorMessage("Error writing the message", ioe); + return false; + } finally { + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + /** + * zip up all of the data expected to be in the encrypted body + */ + private byte[] prepareBody(Opts args, UI ui, Map privateHeaders, String refsStr, byte avatar[]) throws IOException { + ByteArrayOutputStream baos = new ByteArrayOutputStream(4*1024); + ZipOutputStream zos = new 
ZipOutputStream(baos); + if ( (privateHeaders != null) && (privateHeaders.size() > 0) ) { + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_HEADERS); + entry.setTime(0); + zos.putNextEntry(entry); + write(privateHeaders, zos); + zos.flush(); + zos.closeEntry(); + ui.debugMessage("Private headers included (size=" + privateHeaders.size() + ")"); + } else { + ui.debugMessage("Private headers NOT included"); + } + if ( (avatar != null) && (avatar.length > 0) ) { + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_AVATAR); + entry.setTime(0); + entry.setSize(avatar.length); + zos.putNextEntry(entry); + zos.write(avatar); + zos.closeEntry(); + } + if (refsStr != null) { + ui.debugMessage("References string is " + refsStr.length() + " bytes long"); + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_REFERENCES); + entry.setTime(0); + byte ref[] = DataHelper.getUTF8(refsStr); + entry.setSize(ref.length); + zos.putNextEntry(entry); + zos.write(ref); + zos.closeEntry(); + } else { + ui.debugMessage("No references included"); + } + + int page = 0; + while (true) { + String dataFile = args.getOptValue("page" + page); + String cfgFile = args.getOptValue("page" + page + "-config"); + if (dataFile != null) { + byte data[] = read(ui, dataFile, 256*1024); + if (data == null) + throw new IOException("Data for page " + page + " not found in " + dataFile); + + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_PAGE_PREFIX + page + EnclosureBody.ENTRY_PAGE_DATA_SUFFIX); + entry.setTime(0); + entry.setSize(data.length); + zos.putNextEntry(entry); + zos.write(data); + zos.closeEntry(); + + if (cfgFile != null) { + data = read(ui, cfgFile, 32*1024); + if (data == null) + throw new IOException("Config for page " + page + " not found in " + cfgFile); + + entry = new ZipEntry(EnclosureBody.ENTRY_PAGE_PREFIX + page + EnclosureBody.ENTRY_PAGE_CONFIG_SUFFIX); + entry.setTime(0); + entry.setSize(data.length); + zos.putNextEntry(entry); + zos.write(data); + zos.closeEntry(); + } + + page++; + } else { + break; + } + } + + int attachment = 0; + while (true) { + String dataFile = args.getOptValue("attach" + attachment); + String cfgFile = args.getOptValue("attach" + attachment + "-config"); + if (dataFile != null) { + byte data[] = read(ui, dataFile, 256*1024); + if (data == null) + throw new IOException("Data for attachment " + attachment + " not found in " + dataFile); + + ZipEntry entry = new ZipEntry(EnclosureBody.ENTRY_ATTACHMENT_PREFIX + attachment + EnclosureBody.ENTRY_ATTACHMENT_DATA_SUFFIX); + entry.setTime(0); + entry.setSize(data.length); + zos.putNextEntry(entry); + zos.write(data); + zos.closeEntry(); + + if (cfgFile != null) { + data = read(ui, cfgFile, 32*1024); + if (data == null) + throw new IOException("Config for attachment " + attachment + " not found in " + cfgFile); + + entry = new ZipEntry(EnclosureBody.ENTRY_ATTACHMENT_PREFIX + attachment + EnclosureBody.ENTRY_ATTACHMENT_CONFIG_SUFFIX); + entry.setTime(0); + entry.setSize(data.length); + zos.putNextEntry(entry); + zos.write(data); + zos.closeEntry(); + } + + attachment++; + } else { + break; + } + } + + zos.close(); + + byte raw[] = baos.toByteArray(); + return raw; + } + private Map generatePublicHeaders(DBClient client, UI ui, Opts args, Hash channel, Hash targetChannel, SessionKey bodyKey, boolean bodyKeyIsPublic, byte salt[], boolean postAsUnauthorized) { + Map rv = new HashMap(); + if (args.getOptBoolean("postAsReply", false)) { + rv.put(Constants.MSG_HEADER_TYPE, Constants.MSG_TYPE_REPLY); + //if (!targetChannel.equals(channel)) + 
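+            // for replies the target channel header is always written, even when it matches the scope (note the disabled check above)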
rv.put(Constants.MSG_HEADER_TARGET_CHANNEL, targetChannel.toBase64()); + } else { + rv.put(Constants.MSG_HEADER_TYPE, Constants.MSG_TYPE_POST); + if (!targetChannel.equals(channel)) + rv.put(Constants.MSG_HEADER_TARGET_CHANNEL, targetChannel.toBase64()); + } + + // tags + List tags = args.getOptValues("pubTag"); + if ( (tags != null) && (tags.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < tags.size(); i++) + buf.append(strip((String)tags.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_TAGS, buf.toString()); + } + + long msgId = args.getOptLong("messageId", -1); + if (msgId < 0) { // YYYYMMDD+rand + long now = client.ctx().clock().now(); + now = now - (now % 24*60*60*1000); + now += client.ctx().random().nextLong(24*60*60*1000); + msgId = now; + } + rv.put(Constants.MSG_HEADER_POST_URI, strip(SyndieURI.createMessage(channel, msgId).toString())); + + //args.getOptBytes("author"); + + if ( (args.getOptValue("bodyPassphrase") != null) && (args.getOptValue("bodyPassphrasePrompt") != null) ) { + String passphrase = strip(args.getOptValue("bodyPassphrase")); + String prompt = strip(args.getOptValue("bodyPassphrasePrompt")); + rv.put(Constants.MSG_HEADER_PBE_PROMPT, prompt); + rv.put(Constants.MSG_HEADER_PBE_PROMPT_SALT, Base64.encode(salt)); + } else if ( (bodyKeyIsPublic) || + (!args.getOptBoolean("encryptContent", false) || postAsUnauthorized) && + (!args.getOptBoolean("postAsReply", false)) ) { + // if we are NOT trying to privately encrypt the content (or if we are acting as if + // we don't know the channel's read key(s)), then publicize the bodyKey in the public + // headers (so anyone can open the zip content and read the private headers/refs/avatar/etc) + rv.put(Constants.MSG_HEADER_BODYKEY, strip(bodyKey.toBase64())); + } + + ui.debugMessage("public headers: " + rv); + return rv; + } + private Map generatePrivateHeaders(DBClient client, UI ui, Opts args, Hash channel, SigningPrivateKey authenticationPrivate, byte authenticationMask[]) { + Map rv = new HashMap(); + + // tags + List tags = args.getOptValues("privTag"); + if ( (tags != null) && (tags.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < tags.size(); i++) + buf.append(strip((String)tags.get(i))).append('\t'); + rv.put(Constants.MSG_META_HEADER_TAGS, buf.toString()); + } + + String referenceStrings = args.getOptValue("references"); + if (referenceStrings != null) { + StringBuffer refs = new StringBuffer(); + String refList[] = Constants.split(',', referenceStrings); + for (int i = 0; i < refList.length; i++) { + try { + SyndieURI uri = new SyndieURI(refList[i]); + refs.append(strip(uri.toString())); + refs.append('\t'); + } catch (URISyntaxException use) { + // invalid + ui.errorMessage("URI reference is not valid: " + refList[i], use); + } + } + rv.put(Constants.MSG_HEADER_REFERENCES, refs.toString()); + } + + String overwrite = args.getOptValue("overwrite"); + if (overwrite != null) { + try { + SyndieURI uri = new SyndieURI(overwrite); + rv.put(Constants.MSG_HEADER_OVERWRITE, strip(uri.toString())); + } catch (URISyntaxException use) { + ui.debugMessage("Overwrite URI is not valid: " + overwrite, use); + } + } + + if (args.getOptBoolean("forceNewThread", false)) + rv.put(Constants.MSG_HEADER_FORCE_NEW_THREAD, Boolean.TRUE.toString()); + + if (args.getOptBoolean("refuseReplies", false)) + rv.put(Constants.MSG_HEADER_REFUSE_REPLIES, Boolean.TRUE.toString()); + + List cancel = args.getOptValues("cancel"); + if (cancel != null) { + StringBuffer refs = new 
StringBuffer(); + for (int i = 0; i < cancel.size(); i++) { + String ref = (String)cancel.get(i); + try { + SyndieURI uri = new SyndieURI(ref); + refs.append(strip(uri.toString())); + refs.append('\t'); + } catch (URISyntaxException use) { + // invalid + ui.debugMessage("Cancelled URI reference is not valid: " + ref, use); + } + } + rv.put(Constants.MSG_HEADER_CANCEL, refs.toString()); + } + + String val = args.getOptValue("subject"); + if (val != null) + rv.put(Constants.MSG_HEADER_SUBJECT, strip(val)); + + String expiration = args.getOptValue("expiration"); + if (val != null) + rv.put(Constants.MSG_HEADER_EXPIRATION, strip(expiration)); + + if (authenticationPrivate != null) { + SigningPublicKey ident = client.ctx().keyGenerator().getSigningPublicKey(authenticationPrivate); + rv.put(Constants.MSG_HEADER_AUTHOR, ident.calculateHash().toBase64()); + if (authenticationMask != null) + rv.put(Constants.MSG_HEADER_AUTHENTICATION_MASK, Base64.encode(authenticationMask)); + } + + rv.put(Constants.MSG_HEADER_TARGET_CHANNEL, channel.toBase64()); + + ui.debugMessage("private headers: " + rv); + return rv; + } + + /** + * default the $channel and $authenticationKey to the nym's blog, the $authorizationKey + * to the nym's blog (or if the nym has a post or manage key for the target channel, + * one of those keys), default --encryptContent to true if a readKey is known + */ + private boolean updateSimpleArgs(DBClient client, UI ui, long nymId, Opts args) { + List keys = client.getNymKeys(nymId, client.getPass(), null, null); + Hash channel = null; + Hash authenticationKey = null; + Hash authorizationKey = null; + SessionKey readKey = null; + + byte chan[] = args.getOptBytes("scopeChannel"); + if (chan == null) { + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + if (Constants.KEY_FUNCTION_MANAGE.equals(key.getFunction())) { + if (channel == null) { + SigningPrivateKey k = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(k); + channel = pub.calculateHash(); + } else { + ui.errorMessage("Cannot use simple mode, as no channel was specified but multiple management keys are known"); + channel = null; + return false; + } + } + } + } else { + channel = new Hash(chan); + } + + if (channel == null) + return false; + + byte k[] = args.getOptBytes("authenticationKey"); + if (k != null) + authenticationKey = new Hash(k); + k = args.getOptBytes("authorizationKey"); + if (k != null) + authorizationKey = new Hash(k); + + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + if (Constants.KEY_FUNCTION_MANAGE.equals(key.getFunction())) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + Hash privChan = pub.calculateHash(); + // take the first authentication key found, as its probably our blog + if (authenticationKey == null) + authenticationKey = priv.calculateHash(); + // take the authorization key associated with the target chan + if ((authorizationKey == null) && privChan.equals(channel)) + authorizationKey = priv.calculateHash(); + } else if (Constants.KEY_FUNCTION_READ.equals(key.getFunction())) { + if (key.getChannel().equals(channel)) + readKey = new SessionKey(key.getData()); + } + } + //if ( (authenticationKey != null) && (authorizationKey == null) ) + // authorizationKey = authenticationKey; // self-authorized, may not be sufficient + + if ( (readKey == null) && (channel != null) ) { + List read = 
client.getReadKeys(channel, nymId, client.getPass()); + if ( (read != null) && (read.size() > 0) ) { + int index = client.ctx().random().nextInt(read.size()); + readKey = (SessionKey)read.get(index); + } + } + + if ( (authenticationKey != null) && + //(authorizationKey != null) && + (channel != null) ) { + // ok, found what we need + List chans = args.getOptValues("targetChannel"); + if ( (chans == null) || (chans.size() <= 0) ) + args.addOptValue("targetChannel", channel.toBase64()); + else + chans.add(channel.toBase64()); + chans = args.getOptValues("scopeChannel"); + if ( (chans == null) || (chans.size() <= 0) ) + args.addOptValue("scopeChannel", channel.toBase64()); + else + chans.add(channel.toBase64()); + + keys = args.getOptValues("authenticationKey"); + if ( (keys == null) || (keys.size() <= 0) ) + args.addOptValue("authenticationKey", authenticationKey.toBase64()); + else + keys.add(authenticationKey.toBase64()); + + if (authorizationKey != null) { + keys = args.getOptValues("authorizationKey"); + if ( (keys == null) || (keys.size() <= 0) ) + args.addOptValue("authorizationKey", authorizationKey.toBase64()); + else + keys.add(authorizationKey.toBase64()); + } + + if ( (readKey != null) && (args.getOptValue("encryptContent") == null) ) + args.setOptValue("encryptContent", "true"); + return true; + } else { + ui.errorMessage("Auth keys not found, cant use simple mode"); + return false; + } + } + + public static void omain(String args[]) { + try { + CLI.main(new String[] { "messagegen", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j", + "--postAsReply", "true", + "--channel", "2klF2vDob7M82j8ZygZ-s9LmOHfaAdso5V0DzLvHISI=", + "--page0", "/etc/passwd", "--page0-config", "/dev/null", + "--attach0", "/etc/resolv.conf", "--attach0-config", "/dev/null", + "--authenticationKey", "bOdorbv8kVon7dEHHaFzuhz8qNMfX9Izcrh-rzZ0x6U=", + "--authorizationKey", "bOdorbv8kVon7dEHHaFzuhz8qNMfX9Izcrh-rzZ0x6U=", + "--simple", "false", + "--out", "/tmp/messageOut" + }); + } catch (Exception e) { e.printStackTrace(); } + } + // example of the scriptability: + /* + public static void main(String args[]) { + TextUI ui = new TextUI(true); + ui.insertCommand("login --db jdbc:hsqldb:file:/tmp/textui --login j --pass j"); + ui.insertCommand("menu read"); + ui.insertCommand("channels"); + ui.insertCommand("messages --channel 0"); + ui.insertCommand("view --message 4"); + TextEngine engine = new TextEngine(ui); + engine.run(); + } + */ + public static void main(String args[]) { + try { + CLI.main(new String[] { "messagegen", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "bar", + "--pass", "bar", + "--page0", "/etc/passwd", "--page0-config", "/dev/null", + "--attach0", "/etc/resolv.conf", "--attach0-config", "/dev/null", + "--simple", "true", + "--out", "/tmp/simpleOut" + }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/MessageList.java b/src/syndie/db/MessageList.java new file mode 100644 index 0000000..82f32d7 --- /dev/null +++ b/src/syndie/db/MessageList.java @@ -0,0 +1,84 @@ +package syndie.db; + +import java.io.File; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; + +/** + *CLI messagelist + * --db $url + * --channel $base64(channelHash) + */ +public class MessageList extends CommandImpl { + MessageList() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", 
"channel" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "channel" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + client.connect(args.getOptValue("db")); + } + Hash chan = new Hash(args.getOptBytes("channel")); + ui.statusMessage("Channel " + chan.toBase64()); + List internalIds = client.getMessageIdsPrivate(chan); + if (internalIds.size() > 0) { + ui.statusMessage("Private messages available: "); + for (int i = 0; i < internalIds.size(); i++) + ui.statusMessage("\tmessage " + internalIds.get(i)); + } + internalIds = client.getMessageIdsAuthorized(chan); + if (internalIds.size() > 0) { + ui.statusMessage("Authorized messages available: "); + for (int i = 0; i < internalIds.size(); i++) + ui.statusMessage("\tmessage " + internalIds.get(i)); + } + internalIds = client.getMessageIdsAuthenticated(chan); + if (internalIds.size() > 0) { + ui.statusMessage("Authenticated yet unauthorized messages available: "); + for (int i = 0; i < internalIds.size(); i++) + ui.statusMessage("\tmessage " + internalIds.get(i)); + } + internalIds = client.getMessageIdsUnauthenticated(chan); + if (internalIds.size() > 0) { + ui.statusMessage("Unauthenticated and unauthorized messages available: "); + for (int i = 0; i < internalIds.size(); i++) + ui.statusMessage("\tmessage " + internalIds.get(i)); + } + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "messagelist", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j", + "--channel", "2klF2vDob7M82j8ZygZ-s9LmOHfaAdso5V0DzLvHISI=" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/MessageReferenceBuilder.java b/src/syndie/db/MessageReferenceBuilder.java new file mode 100644 index 0000000..3c2ea9d --- /dev/null +++ b/src/syndie/db/MessageReferenceBuilder.java @@ -0,0 +1,136 @@ +package syndie.db; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.*; +import net.i2p.data.Hash; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + * Walk through the database and build a tree of references from a given message + */ +class MessageReferenceBuilder { + private DBClient _client; + private Map _referenceIdToReferenceNode; + + public MessageReferenceBuilder(DBClient client) { + _client = client; + _referenceIdToReferenceNode = new TreeMap(); + } + + /** get the reference trees from the given message + */ + public List loadReferences(long internalMsgId) throws SQLException { + buildReferences(internalMsgId); + resolveTree(); + List rv = new ArrayList(); + for (Iterator iter = _referenceIdToReferenceNode.values().iterator(); iter.hasNext(); ) { + ReferenceNode node = (ReferenceNode)iter.next(); + if (node.getParent() == null) + rv.add(node); + } + _referenceIdToReferenceNode.clear(); + return rv; + } + + private static final String SQL_GET_MESSAGE_REFERENCE = "SELECT referenceId, parentReferenceId, siblingOrder, name, 
description, uriId, refType FROM messageReference WHERE msgId = ? ORDER BY referenceId ASC"; + /* + CREATE CACHED TABLE messageReference ( + msgId BIGINT NOT NULL + -- referenceId is unique within the msgId scope + , referenceId INTEGER NOT NULL + , parentReferenceId INTEGER NOT NULL + , siblingOrder INTEGER NOT NULL + , name VARCHAR(128) + , description VARCHAR(512) + , uriId BIGINT + , refType VARCHAR(64) + , PRIMARY KEY (msgId, referenceId) + , UNIQUE (msgId, parentReferenceId, siblingOrder) + ); + */ + private void buildReferences(long msgId) throws SQLException { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _client.con().prepareStatement(SQL_GET_MESSAGE_REFERENCE); + stmt.setLong(1, msgId); + rs = stmt.executeQuery(); + while (rs.next()) { + // referenceId, parentReferenceId, siblingOrder, name, description, uriId, refType + int refId = rs.getInt(1); + if (rs.wasNull()) continue; + int parentId = rs.getInt(2); + if (rs.wasNull()) parentId = -1; + int order = rs.getInt(3); + if (rs.wasNull()) order = 0; + String name = rs.getString(4); + String desc = rs.getString(5); + long uriId = rs.getLong(6); + if (rs.wasNull()) uriId = -1; + String refType = rs.getString(7); + + SyndieURI uri = _client.getURI(uriId); + MsgReferenceNode node = new MsgReferenceNode(name, uri, desc, refType, refId, parentId, order); + _referenceIdToReferenceNode.put(new Integer(refId), node); + } + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private void resolveTree() { + setParents(); + orderChildren(); + } + + private void setParents() { + for (Iterator iter = _referenceIdToReferenceNode.values().iterator(); iter.hasNext(); ) { + MsgReferenceNode node = (MsgReferenceNode)iter.next(); + if (node.getParentReferenceId() >= 0) { + MsgReferenceNode parent = (MsgReferenceNode)_referenceIdToReferenceNode.get(new Integer(node.getParentReferenceId())); + if (parent != null) { + node.setParent(parent); + parent.addChild(node); + } + } + } + } + private void orderChildren() { + for (Iterator iter = _referenceIdToReferenceNode.values().iterator(); iter.hasNext(); ) { + MsgReferenceNode node = (MsgReferenceNode)iter.next(); + node.orderChildren(); + } + } + + private class MsgReferenceNode extends ReferenceNode { + private int _referenceId; + private int _parentReferenceId; + private int _siblingOrder; + public MsgReferenceNode(String name, SyndieURI uri, String description, String type, int refId, int parentId, int order) { + super(name, uri, description, type); + _referenceId = refId; + _parentReferenceId = parentId; + _siblingOrder = order; + } + public int getReferenceId() { return _referenceId; } + public int getParentReferenceId() { return _parentReferenceId; } + public int getSiblingOrder() { return _siblingOrder; } + public void setParent(MsgReferenceNode node) { _parent = node; } + public void orderChildren() { + TreeMap ordered = new TreeMap(); + for (int i = 0; i < _children.size(); i++) { + MsgReferenceNode child = (MsgReferenceNode)_children.get(i); + ordered.put(new Integer(child.getSiblingOrder()), child); + } + _children.clear(); + for (Iterator iter = ordered.values().iterator(); iter.hasNext(); ) { + MsgReferenceNode child = (MsgReferenceNode)iter.next(); + addChild(child); // adjusts the child's tree index too + } + } + } +} diff --git a/src/syndie/db/MessageThreadBuilder.java b/src/syndie/db/MessageThreadBuilder.java new file mode 100644 index 0000000..100942e --- 
/dev/null +++ b/src/syndie/db/MessageThreadBuilder.java @@ -0,0 +1,354 @@ +package syndie.db; + +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import syndie.data.ChannelInfo; +import syndie.data.MessageInfo; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + * Walk through the message database and build a tree of messages. note that + * there is currently a bug where a single thread built from different points + * can result in a tree with branches in a different order. so, to keep it + * consistent, either don't always rebuild the tree, or always build it from + * the root. or, of course, improve the algorithm so that it has a single + * canonical form. + */ +class MessageThreadBuilder { + private DBClient _client; + private UI _ui; + private Map _uriToReferenceNode; + private List _pendingURI; + private ThreadedReferenceNode _root; + + public MessageThreadBuilder(DBClient client, UI ui) { + _client = client; + _ui = ui; + _uriToReferenceNode = new HashMap(); + _pendingURI = new ArrayList(); + } + + /** + * build the full tree that the given message is a part of, returning + * the root. each node has the author's preferred name stored in node.getName() + * and the message subject in node.getDescription(), with the message URI in + * node.getURI(). + */ + public ReferenceNode buildThread(MessageInfo msg) { + long chanId = msg.getScopeChannelId(); + ChannelInfo chan = _client.getChannel(chanId); + long msgId = msg.getMessageId(); + if ( (chan != null) && (msgId >= 0) ) + _pendingURI.add(SyndieURI.createMessage(chan.getChannelHash(), msgId)); + while (_pendingURI.size() > 0) + processNextMessage(); + buildTree(); + return _root; + } + private void processNextMessage() { + SyndieURI uri = (SyndieURI)_pendingURI.remove(0); + if ( (uri.getScope() == null) || (uri.getMessageId() == null) ) + return; + Hash chan = uri.getScope(); + String subject = null; + String authorName = null; + List parentURIs = null; + List childURIs = null; + long chanId = _client.getChannelId(uri.getScope()); + if (chanId >= 0) { + ChannelInfo chanInfo = _client.getChannel(chanId); + if (chanInfo != null) + authorName = chanInfo.getName(); + else + authorName = uri.getScope().toBase64().substring(0,6); + + MessageInfo msg = _client.getMessage(chanId, uri.getMessageId()); + if (msg != null) { + subject = msg.getSubject(); + if (!msg.getForceNewThread()) { + parentURIs = getParentURIs(msg.getInternalId()); + enqueue(parentURIs); + } + if (!msg.getRefuseReplies()) { + childURIs = getChildURIs(chan, msg.getMessageId()); + enqueue(childURIs); + } + } + } else { + authorName = uri.getScope().toBase64().substring(0,6); + } + + ThreadedReferenceNode node = new ThreadedReferenceNode(authorName, uri, subject); + node.setHistory(parentURIs, childURIs); + _uriToReferenceNode.put(uri, node); + } + + private void enqueue(List uris) { + for (int i = 0; i < uris.size(); i++) { + SyndieURI uri = (SyndieURI)uris.get(i); + if (_pendingURI.contains(uri)) { + // already pending, noop + } else if (!_uriToReferenceNode.containsKey(uri)) { + _pendingURI.add(uri); + } else { + ReferenceNode ref = (ReferenceNode)_uriToReferenceNode.get(uri); + if (ref.getURI() == null) // only known by reference, not yet pending + _pendingURI.add(uri); + } + } + } + + private static final String SQL_GET_PARENT_URIS = "SELECT referencedChannelHash, referencedMessageId FROM messageHierarchy WHERE msgId = ? 
ORDER BY referencedCloseness ASC, msgId DESC"; + /* CREATE CACHED TABLE messageHierarchy ( + * msgId BIGINT + * -- refers to a targetChannelId + * , referencedChannelHash VARBINARY(32) + * , referencedMessageId BIGINT + * -- how far up the tree is the referenced message? parent has a closeness of 1, + * -- grandparent has a closeness of 2, etc. does not necessarily have to be exact, + * -- merely relative + * , referencedCloseness INTEGER DEFAULT 1 + * , PRIMARY KEY (msgId, referencedCloseness) + * ); + * + */ + private List getParentURIs(long msgId) { + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = _client.con().prepareStatement(SQL_GET_PARENT_URIS); + stmt.setLong(1, msgId); + rs = stmt.executeQuery(); + List rv = new ArrayList(); + while (rs.next()) { + byte chan[] = rs.getBytes(1); + long chanMsg = rs.getLong(2); + if (rs.wasNull()) + chanMsg = -1; + if ( (chan != null) && (chan.length == Hash.HASH_LENGTH) && (chanMsg >= 0) ) + rv.add(SyndieURI.createMessage(new Hash(chan), chanMsg)); + } + return rv; + } catch (SQLException se) { + _ui.errorMessage("Error retrieving parent URIs", se); + return Collections.EMPTY_LIST; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private static final String SQL_GET_CHILD_URIS = "SELECT msgId FROM messageHierarchy WHERE referencedChannelHash = ? AND referencedMessageId = ? ORDER BY referencedCloseness ASC, msgId DESC"; + /* CREATE CACHED TABLE messageHierarchy ( + * msgId BIGINT + * -- refers to a targetChannelId + * , referencedChannelHash VARBINARY(32) + * , referencedMessageId BIGINT + * -- how far up the tree is the referenced message? parent has a closeness of 1, + * -- grandparent has a closeness of 2, etc. 
does not necessarily have to be exact, + * -- merely relative + * , referencedCloseness INTEGER DEFAULT 1 + * , PRIMARY KEY (msgId, referencedCloseness) + * ); + * + */ + private List getChildURIs(Hash channel, long messageId) { + PreparedStatement stmt = null; + _client.getMessageIdsAuthenticated(channel); + ResultSet rs = null; + try { + stmt = _client.con().prepareStatement(SQL_GET_CHILD_URIS); + stmt.setBytes(1, channel.getData()); + stmt.setLong(2, messageId); + rs = stmt.executeQuery(); + List rv = new ArrayList(); + while (rs.next()) { + long internalMsgId = rs.getLong(1); + if (!rs.wasNull()) { + MessageInfo msg = _client.getMessage(internalMsgId); + if (msg != null) + rv.add(msg.getURI()); + } + } + return rv; + } catch (SQLException se) { + _ui.errorMessage("Error retrieving child URIs", se); + return Collections.EMPTY_LIST; + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + private void buildTree() { + for (Iterator iter = _uriToReferenceNode.values().iterator(); iter.hasNext(); ) { + ThreadedReferenceNode node = (ThreadedReferenceNode)iter.next(); + buildTree(node); + } + pruneEmpty(); + reindexTree(); // walk through the + } + private void buildTree(ThreadedReferenceNode node) { + ThreadedReferenceNode cur = node; + List parents = node.getParentURIs(); + if (parents != null) { + //_ui.debugMessage("building tree for " + node.getURI() + ": parents: " + parents); + for (int i = 0; i < parents.size(); i++) { + SyndieURI uri = (SyndieURI)parents.get(i); + ThreadedReferenceNode parent = (ThreadedReferenceNode)_uriToReferenceNode.get(uri); + if (parent == null) { + parent = new ThreadedReferenceNode(null, uri, null); + _uriToReferenceNode.put(uri, parent); + } + if (cur.getParent() == null) + parent.addChild(cur); + cur = parent; + } + } + } + private void pruneEmpty() { + for (Iterator iter = _uriToReferenceNode.keySet().iterator(); iter.hasNext(); ) { + SyndieURI uri = (SyndieURI)iter.next(); + ThreadedReferenceNode node = (ThreadedReferenceNode)_uriToReferenceNode.get(uri); + if (node.getName() == null) { + _ui.debugMessage("dummy node, parent=" + node.getParent() + " kids: " + node.getChildCount()); + // dummy node + if (node.getParent() == null) { + // we are at the root, so don't pull up any kids (unless there's just one) + if (node.getChildCount() == 1) { + ThreadedReferenceNode child = (ThreadedReferenceNode)node.getChild(0); + child.setParent(null); + iter.remove(); + } else { + if (_root != null) { + _ui.debugMessage("Corrupt threading, multiple roots"); + _ui.debugMessage("Current root: " + _root.getURI()); + _ui.debugMessage("New root: " + node.getURI()); + } + _root = node; + } + } else { + // pull up the children + _ui.debugMessage("Pulling up the " + node.getChildCount() + " children"); + ThreadedReferenceNode parent = (ThreadedReferenceNode)node.getParent(); + for (int i = 0; i < node.getChildCount(); i++) { + ThreadedReferenceNode child = (ThreadedReferenceNode)node.getChild(i); + parent.addChild(child); + } + iter.remove(); + } + } else { + if (node.getParent() == null) { + if (_root != null) { + _ui.debugMessage("Corrupt threading, multiple roots"); + _ui.debugMessage("Current root: " + _root.getURI()); + _ui.debugMessage("New root: " + node.getURI()); + } + _root = node; + } + } + } + } + private void reindexTree() { + List roots = new ArrayList(1); + roots.add(_root); + ThreadWalker walker = new ThreadWalker(_ui); + ReferenceNode.walk(roots, walker); + } 
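// Illustrative sketch (editorial, derived from the ThreadWalker visitor below): reindexTree()
// assigns dotted tree indexes of the form parentIndex + "." + (siblingOrder+1), with the root
// using siblingOrder+1 alone. A root post with two replies, where the first reply has one
// reply of its own, would therefore end up indexed as:
//   root          -> "1"
//   first reply   -> "1.1"
//   its reply     -> "1.1.1"
//   second reply  -> "1.2"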
+ + private class ThreadWalker implements ReferenceNode.Visitor { + private UI _ui; + public ThreadWalker(UI ui) { _ui = ui; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + SyndieURI uri = node.getURI(); + if (uri == null) return; + Hash channel = uri.getScope(); + Long msgId = uri.getMessageId(); + if ( (channel == null) || (msgId == null) ) return; + ThreadedReferenceNode tnode = (ThreadedReferenceNode)node; + String oldIndex = tnode.getTreeIndex(); + if (tnode.getParent() == null) + tnode.setTreeIndex("" + (siblingOrder+1)); + else + tnode.setTreeIndex(tnode.getParent().getTreeIndex() + "." + (siblingOrder+1)); + String newIndex = tnode.getTreeIndex(); + if (!newIndex.equals(oldIndex)) + _ui.debugMessage("Reindexing " + oldIndex + " to " + newIndex); + } + } + + private class ThreadedReferenceNode extends ReferenceNode { + private List _parentURIs; + private List _childURIs; + public ThreadedReferenceNode(String name, SyndieURI uri, String description) { + super(name, uri, description, null); + _parentURIs = new ArrayList(); + _childURIs = new ArrayList(); + } + public void setHistory(List parentURIs, List childURIs) { + if (parentURIs != null) { + _parentURIs = parentURIs; + } else { + if ( (_parentURIs == null) || (_parentURIs.size() > 0) ) + _parentURIs = new ArrayList(); + } + if (childURIs != null) { + _childURIs = childURIs; + } else { + if ( (_childURIs == null) || (_childURIs.size() > 0) ) + _childURIs = new ArrayList(); + } + } + public List getParentURIs() { return _parentURIs; } + public List getChildURIs() { return _childURIs; } + public void setParent(ThreadedReferenceNode node) { _parent = node; } + public void setTreeIndex(String index) { _treeIndex = index; } + } + + public static void main(String args[]) { + TextUI ui = new TextUI(true); + final TextEngine te = new TextEngine("/tmp/cleandb", ui); + ui.insertCommand("login"); + te.runStep(); + MessageThreadBuilder mtb = new MessageThreadBuilder(te.getClient(), ui); + MessageInfo onetwotwo = te.getClient().getMessage(4); + ReferenceNode onetwotwoThread = mtb.buildThread(onetwotwo); + walk(onetwotwoThread, ui); + ui.debugMessage("built from " + onetwotwo.getScopeChannel().toBase64().substring(0,6) + ":" + onetwotwo.getMessageId()); + + mtb = new MessageThreadBuilder(te.getClient(), ui); + MessageInfo onetwo = te.getClient().getMessage(10); + ReferenceNode onetwoThread = mtb.buildThread(onetwo); + walk(onetwoThread, ui); + ui.debugMessage("built from " + onetwo.getScopeChannel().toBase64().substring(0,6) + ":" + onetwo.getMessageId()); + + ui.insertCommand("exit"); + te.runStep(); + } + + private static void walk(ReferenceNode root, UI ui) { + List roots = new ArrayList(1); + roots.add(root); + ThreadView walker = new ThreadView(ui); + ui.statusMessage("Thread: "); + ReferenceNode.walk(roots, walker); + } + + private static class ThreadView implements ReferenceNode.Visitor { + private UI _ui; + public ThreadView(UI ui) { _ui = ui; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + SyndieURI uri = node.getURI(); + if (uri == null) return; + Hash channel = uri.getScope(); + Long msgId = uri.getMessageId(); + if ( (channel == null) || (msgId == null) ) return; + _ui.debugMessage("Visiting " + node.getTreeIndex() + ": " + channel.toBase64().substring(0,6) + ":" + msgId); + } + } +} diff --git a/src/syndie/db/NestedGobbleUI.java b/src/syndie/db/NestedGobbleUI.java new file mode 100644 index 0000000..3a9e33b --- /dev/null +++ b/src/syndie/db/NestedGobbleUI.java @@ -0,0 +1,16 @@ 
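// Usage sketch (assumed caller code, based on the class that follows): NestedGobbleUI wraps an
// existing UI so that nested commands only surface status output at debug level, while errors
// still reach the real UI, e.g.:
//   UI quiet = new NestedGobbleUI(baseUi);   // baseUi is any existing UI implementation
//   quiet.statusMessage("shown only when debug output is enabled");
//   quiet.errorMessage("still passed through to baseUi");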
+package syndie.db; + +import java.util.List; + +/** + * Gobble up any normal status messages (but still display error messages, + * as well as debug messages, if configured to do so) + * + */ +public class NestedGobbleUI extends NestedUI { + public NestedGobbleUI(UI real) { super(real); } + public void statusMessage(String msg) { debugMessage(msg); } + public Opts readCommand() { return super.readCommand(false); } + protected void displayPrompt() { System.out.println("nested displayPrompt"); } + public void commandComplete(int status, List location) {} +} diff --git a/src/syndie/db/NestedUI.java b/src/syndie/db/NestedUI.java new file mode 100644 index 0000000..1588b74 --- /dev/null +++ b/src/syndie/db/NestedUI.java @@ -0,0 +1,27 @@ +package syndie.db; + +import java.util.List; + +/** + */ +public class NestedUI implements UI { + protected UI _real; + private int _exit; + public NestedUI(UI real) { _real = real; _exit = 0; } + public int getExitCode() { return _exit; } + public Opts readCommand() { return _real.readCommand(); } + public Opts readCommand(boolean displayPrompt) { return _real.readCommand(displayPrompt); } + public void errorMessage(String msg) { _real.errorMessage(msg); } + public void errorMessage(String msg, Exception cause) { _real.errorMessage(msg, cause); } + public void statusMessage(String msg) { _real.statusMessage(msg); } + public void debugMessage(String msg) { _real.debugMessage(msg); } + public void debugMessage(String msg, Exception cause) { _real.debugMessage(msg, cause); } + public void commandComplete(int status, List location) { + _exit = status; + // don't propogate the command completion, as we are nested + } + public boolean toggleDebug() { return _real.toggleDebug(); } + public boolean togglePaginate() { return _real.togglePaginate(); } + public void insertCommand(String cmd) { _real.insertCommand(cmd); } + public String readStdIn() { return _real.readStdIn(); } +} diff --git a/src/syndie/db/Opts.java b/src/syndie/db/Opts.java new file mode 100644 index 0000000..fe01b42 --- /dev/null +++ b/src/syndie/db/Opts.java @@ -0,0 +1,266 @@ +package syndie.db; + +import java.util.*; +import net.i2p.data.Base64; + +/** + */ +public class Opts { + private String _command; + private Map _opts; + private List _args; + private int _size; + private boolean _parseOk; + private String _origLine; + + /** + * Parse out a list of string[]s into a multivalued mapping of 0 or more (--name value) + * options, followed by a list of 0 or more arguments. the options end when an option + * doesn't begin with "--" or when an option has no name (e.g. 
"--opt1 val1 -- arg1") + */ + public Opts(String cmd, String args[]) { + _command = cmd; + _parseOk = parse(args); + } + public Opts(Opts old) { + _command = old._command; + _opts = new HashMap(old._opts); + _args = new ArrayList(old._args); + _size = old._size; + _parseOk = old._parseOk; + } + public Opts() { + _command = null; + _opts = new HashMap(); + _args = new ArrayList(); + _size = 0; + _parseOk = true; + } + /** + * @param line unparsed command line (starting with the command to be run) + */ + public Opts(String line) { + this(); + _origLine = line; + List elements = splitLine(line); + + if (elements.size() > 0) { + _command = (String)elements.get(0); + if (elements.size() > 1) { + String elems[] = new String[elements.size()-1]; + for (int i = 0; i < elems.length; i++) + elems[i] = (String)elements.get(i+1); + _parseOk = parse(elems); + } + } + } + public boolean parse(String args[]) { + _opts = new HashMap(); + _args = new ArrayList(); + if (args == null) return false; + int argBegin = args.length; + try { + for (int i = 0; i < argBegin; i+=2) { + if (args[i].equals("--")) { + argBegin = i+1; + continue; + } else if (args[i].startsWith("--")) { + String arg = args[i].substring("--".length()); + if (i+1 >= args.length) { + _opts.clear(); + _args.clear(); + _size = 0; + return false; + } + String param = args[i+1]; + List vals = (List)_opts.get(arg); + if (vals == null) + vals = new ArrayList(); + vals.add(param); + _opts.put(arg, vals); + _size++; + } else { + argBegin = i; + } + } + for (int i = argBegin; i < args.length; i++) { + _args.add(args[i]); + _size++; + } + return true; + } catch (ArrayIndexOutOfBoundsException e) { + return false; + } + } + public boolean getParseOk() { return _parseOk; } + public String getCommand() { return _command; } + public void setCommand(String cmd) { _command = cmd; } + public String getOrigLine() { return _origLine; } + public Set getOptNames() { return new HashSet(_opts.keySet()); } + public String getOptValue(String name) { + List vals = (List)_opts.get(name); + if ( (vals != null) && (vals.size() > 0) ) + return (String)vals.get(0); + else + return null; + } + public List getOptValues(String name) { return (List)_opts.get(name); } + public boolean getOptBoolean(String name, boolean defaultValue) { + String val = getOptValue(name); + if (val == null) + return defaultValue; + else + return Boolean.valueOf(val).booleanValue(); + } + public long getOptLong(String name, long defaultValue) { + String val = getOptValue(name); + if (val == null) { + return defaultValue; + } else { + try { + return Long.parseLong(val); + } catch (NumberFormatException nfe) { + return defaultValue; + } + } + } + public byte[] getOptBytes(String name) { + String val = getOptValue(name); + if (val == null) { + return null; + } else { + return Base64.decode(val); + } + } + public List getArgs() { return _args; } + public String getArg(int index) { + if ( (index >= 0) && (index < _args.size()) ) + return (String)_args.get(index); + return null; + } + public int size() { return _size; } + /** return list of missing options, or an empty list if we have all of the required options */ + public List requireOpts(String opts[]) { + List missing = new ArrayList(); + for (int i = 0; i < opts.length; i++) { + if (!_opts.containsKey(opts[i])) + missing.add(opts[i]); + } + return missing; + } + + public void setOptValue(String name, String val) { addOptValue(name, val); } + public void addOptValue(String name, String val) { + if ( (val == null) || (name == null) ) return; + List 
vals = getOptValues(name); + if (vals == null) { + vals = new ArrayList(); + _opts.put(name, vals); + } + vals.add(val); + } + public void addArg(String val) { + if (_args == null) _args = new ArrayList(); + _args.add(val); + _size++; + } + public boolean dbOptsSpecified() { + return ( (getOptValue("db") != null) && + (getOptValue("login") != null) && + (getOptValue("pass") != null)); + } + + public String toString() { + StringBuffer buf = new StringBuffer(); + for (Iterator iter = _opts.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = getOptValue(name); + buf.append(name).append('=').append(val).append('\t'); + } + return buf.toString(); + } + + public static void main(String args[]) { + System.out.println(splitLine(" hi how are you?").toString()); + System.out.println(splitLine("I am fine, thanks! ").toString()); + System.out.println(splitLine("What you \"up to\" g?").toString()); + System.out.println(splitLine("\"y\'all had best answer\" me").toString()); + System.out.println(splitLine("a \"\" val \"\"")); + // note: fails to parse this correctly (includes '\"you' and '\"best answer\"' as tokens, rather than stripping the '\') + System.out.println(splitLine("\\\"you 'all had' \\\"best answer\\\" me").toString()); + } + /** + * split up the line into tokens, removing intertoken whitespace, grouping + * quoted tokens, etc. does not currently honor \ before a quote properly (it + * leaves the \ before a " or ' in) + */ + private static List splitLine(String line) { + List rv = new ArrayList(); + if (line == null) return rv; + char l[] = line.toCharArray(); + int tokenStart = 0; + int cur = tokenStart; + int curQuote = -1; + while (cur < l.length) { + while ( (curQuote == -1) && (cur < l.length) && (isBlank(l[cur])) ) { + if (tokenStart != -1) { + if (cur - tokenStart > 0) + rv.add(new String(l, tokenStart, cur-tokenStart)); + else if (cur - tokenStart == 0) + rv.add(""); + } + curQuote = -1; + tokenStart = -1; + cur++; + } + if (cur >= l.length) + break; + if (tokenStart == -1) + tokenStart = cur; + if (isQuote(l[cur]) && ( (cur == 0) || (l[cur-1] != '\\') ) ) { + if (curQuote == l[cur]) { // end of the quoted token + if (cur - tokenStart > 0) + rv.add(new String(l, tokenStart, cur-tokenStart)); + else if (cur - tokenStart == 0) + rv.add(""); + curQuote = -1; + tokenStart = -1; + cur++; + } else if (curQuote != -1) { // different quote within the token (eg "hi y'all") + cur++; + } else { // quoted token begin + curQuote = l[cur]; + tokenStart++; + cur++; + } + } else { + cur++; + } + } + if (tokenStart != -1) + rv.add(new String(l, tokenStart, cur-tokenStart)); + + return rv; + } + private static boolean isBlank(char c) { + switch (c) { + case ' ': + case '\t': + case '\r': + case '\n': + case '\f': + return true; + default: + return false; + } + } + private static boolean isQuote(char c) { + switch (c) { + case '\'': + case '\"': + return true; + default: + return false; + } + } +} diff --git a/src/syndie/db/PostMenu.java b/src/syndie/db/PostMenu.java new file mode 100644 index 0000000..f9ad393 --- /dev/null +++ b/src/syndie/db/PostMenu.java @@ -0,0 +1,1925 @@ +package syndie.db; + +import java.io.*; +import java.net.URISyntaxException; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.text.ParseException; +import java.util.*; +import net.i2p.crypto.KeyGenerator; +import net.i2p.data.*; +import syndie.Constants; +import 
syndie.data.*; + +/** + * + */ +class PostMenu implements TextEngine.Menu { + private TextEngine _engine; + /** text description of each indexed channel */ + private List _itemText; + /** internal channel id (Long) for each indexed item */ + private List _itemKeys; + /** if true, the items refer to a list of channels matching the requested criteria */ + private boolean _itemIsChannelList; + /** refers to the next index into the item lists that the user should be shown */ + private int _itemIteratorIndex; + /** current message the user is working on (if any) */ + private MessageInfo _currentMessage; + /** current list of file names to use as pages */ + private List _pageFiles; + /** current list of config (Properties) for each page */ + private List _pageConfig; + /** current list of file names to use as attachments */ + private List _attachmentFiles; + /** current list of config (Properties) for each attachment */ + private List _attachmentConfig; + /** filename to pull the channel avatar from */ + private String _avatarFile; + /** nym keys being listed */ + private List _listedNymKeys; + /** how we should prove who we are */ + private NymKey _authenticationKey; + /** how we should prove we are allowed to post in the target channel */ + private NymKey _authorizationKey; + /** list of references (ReferenceNode) to be delivered with the message */ + private List _referenceNodes; + /** list of parents (SyndieURI) of this message, with the most recent parent at index 0 */ + private List _parents; + /** use a publicly visible encryption key for the post so anyone can read it */ + private Boolean _publiclyReadable; + /** private read key to use when encrypting the post */ + private SessionKey _readKey; + /** pbe key root */ + private String _passphrase; + /** pbe key prompt */ + private String _passphrasePrompt; + /** + * files to delete after post creation or cancellation. this contains + * temp files built from stdin, etc. 
+ */ + private List _toDelete; + + public PostMenu(TextEngine engine) { + _engine = engine; + _itemText = new ArrayList(); + _itemKeys = new ArrayList(); + _toDelete = new ArrayList(); + _itemIsChannelList = false; + _itemIteratorIndex = 0; + resetContent(); + } + private void resetContent() { + _currentMessage = null; + _pageFiles = new ArrayList(); + _pageConfig = new ArrayList(); + _attachmentFiles = new ArrayList(); + _attachmentConfig = new ArrayList(); + _listedNymKeys = new ArrayList(); + _authenticationKey = null; + _authorizationKey = null; + _avatarFile = null; + _referenceNodes = new ArrayList(); + _parents = new ArrayList(); + _publiclyReadable = null; + _passphrase = null; + _passphrasePrompt = null; + _readKey = null; + while (_toDelete.size() > 0) { + String filename = (String)_toDelete.remove(0); + File f = new File(filename); + f.delete(); + } + } + + public static final String NAME = "post"; + public String getName() { return NAME; } + public String getDescription() { return "posting menu"; } + public boolean requireLoggedIn() { return true; } + public void listCommands(UI ui) { + ui.statusMessage(" channels : display a list of channels the current nym can post to"); + if (_itemIsChannelList) { + ui.statusMessage(" next [--lines $num]: paginate through the channels, 10 or $num at a time"); + ui.statusMessage(" prev [--lines $num]: paginate through the channels, 10 or $num at a time"); + } + ui.statusMessage(" meta [--channel ($index|$hash)] : display the current channel's metadata"); + if (_currentMessage == null) { + ui.statusMessage(" create --channel ($index|$hash): begin the process of creating a new post"); + } else { + ui.statusMessage(" addPage [--page $num] --in ($filename|stdin) [--type $contentType]"); + ui.statusMessage(" listpages : display a list of pages currently sloted for posting"); + ui.statusMessage(" delpage $num : delete the given page"); + ui.statusMessage(" addattachment [--attachment $num] --in $filename [--type $contentType]"); + ui.statusMessage(" [--name $name] [--description $desc]"); + ui.statusMessage(" listattachments : display a list of attachments currently sloted for posting"); + ui.statusMessage(" delattachment $num"); + ui.statusMessage(" addref --in $file : load in references from the given file"); + ui.statusMessage(" listkeys [--scope $scope] [--type $type]"); + ui.statusMessage(" addref [--name $name] --uri $uri [--reftype $type] [--description $desc]"); + ui.statusMessage(" : add a single reference. 
the reftype can be 'recommend', 'ignore', etc"); + ui.statusMessage(" addref --readkey $keyHash --scope $scope [--name $name] [--description $desc]"); + ui.statusMessage(" : add a reference that includes the given channel read key (AES256)"); + ui.statusMessage(" addref --postkey $keyHash --scope $scope [--name $name] [--description $desc]"); + ui.statusMessage(" : add a reference that includes the given channel post key (DSA private)"); + ui.statusMessage(" addref --managekey $keyHash --scope $scope [--name $name] [--description $desc]"); + ui.statusMessage(" : add a reference that includes the given channel manage key (DSA private)"); + ui.statusMessage(" addref --replykey $keyHash --scope $scope [--name $name] [--description $desc]"); + ui.statusMessage(" : add a reference that includes the given channel's reply key (ElGamal private)"); + ui.statusMessage(" listrefs : display an indexed list of references already added"); + ui.statusMessage(" delref $index : delete the specified reference"); + ui.statusMessage(" addparent --uri $uri [--order $num]"); + ui.statusMessage(" : add the given syndie URI as a threaded parent to the new message"); + ui.statusMessage(" listparents : display a list of URIs this new post will be marked as"); + ui.statusMessage(" : replying to (most recent parent at index 0)"); + ui.statusMessage(" delparent $index"); + ui.statusMessage(" listauthkeys [--authorizedOnly $boolean]"); + ui.statusMessage(" : display an indexed list of signing keys that the nym has"); + ui.statusMessage(" : access to. if requested, only includes those keys which have"); + ui.statusMessage(" : been marked as authorized to post in the channel (or"); + ui.statusMessage(" : authorized to manage the channel)"); + ui.statusMessage(" authenticate $index: use the specified key to authenticate the post"); + ui.statusMessage(" authorize $index : use the specified key to authorize the post"); + ui.statusMessage(" listreadkeys : display a list of known channel read keys that we can use to"); + ui.statusMessage(" : encrypt the message"); + ui.statusMessage(" set --readkey (public|$index|pbe --passphrase $passphrase --prompt $prompt)"); + ui.statusMessage(" : if public, create a random key and publicize it in the public"); + ui.statusMessage(" : headers. if pbe, then derive a read key from the passphrase,"); + ui.statusMessage(" : publicizing the prompt in the public headers. 
Otherwise use the"); + ui.statusMessage(" : indexed read key for the channel"); + //ui.statusMessage(" set --cancel $uri : state that the given URI should be cancelled (ignored unless authorized)"); + ui.statusMessage(" set --messageId ($id|date) : specify the message Id, or if 'date', generate one based on the date"); + ui.statusMessage(" set --subject $subject : specify the message subject"); + ui.statusMessage(" set --avatar $filename : specify a message-specific avatar to use"); + ui.statusMessage(" set --encryptToReply $boolean"); + ui.statusMessage(" : if true, the message should be encrypted to the channel's reply key"); + ui.statusMessage(" : so that only the channel's owner (or designee) can read it, and the"); + ui.statusMessage(" : channel is included in the public header (if not authorized)"); + ui.statusMessage(" set --overwrite $uri : mark this message as a replacement for the given URI"); + ui.statusMessage(" set --expiration ($yyyyMMdd|none) : suggest a date on which the message can be discarded"); + ui.statusMessage(" set --forceNewThread $boolean : if true, branch off this message into a new thread"); + ui.statusMessage(" set --refuseReplies $boolean : if true, only the author can reply to this message in the same thread"); + ui.statusMessage(" set --publicTags [$tag[,$tag]*]: list of tags visible by anyone"); + ui.statusMessage(" set --privateTags [$tag[,$tag]*]: list of tags visible only by those authorized to read the message"); + ui.statusMessage(" preview [--page $n]: view the post as it will be seen"); + ui.statusMessage(" execute [--out $filename]"); + ui.statusMessage(" : actually generate the post, exporting it to the given file, and then"); + ui.statusMessage(" : importing it into the local database"); + ui.statusMessage(" cancel : clear the current create state without updating anything"); + } + } + public boolean processCommands(DBClient client, UI ui, Opts opts) { + String cmd = opts.getCommand(); + if ("channels".equalsIgnoreCase(cmd)) { + processChannels(client, ui, opts); + } else if ("next".equalsIgnoreCase(cmd)) { + processNext(client, ui, opts); + } else if ("prev".equalsIgnoreCase(cmd)) { + processPrev(client, ui, opts); + } else if ("meta".equalsIgnoreCase(cmd)) { + processMeta(client, ui, opts); + } else if ("cancel".equalsIgnoreCase(cmd)) { + resetContent(); + ui.statusMessage("Posting cancelled"); + ui.commandComplete(-1, null); + } else if ("create".equalsIgnoreCase(cmd)) { + processCreate(client, ui, opts); + } else if ("addpage".equalsIgnoreCase(cmd)) { + processAddPage(client, ui, opts); + } else if ("listpages".equalsIgnoreCase(cmd)) { + processListPages(client, ui, opts); + } else if ("delpage".equalsIgnoreCase(cmd)) { + processDelPage(client, ui, opts); + } else if ("addattachment".equalsIgnoreCase(cmd)) { + processAddAttachment(client, ui, opts); + } else if ("listattachments".equalsIgnoreCase(cmd)) { + processListAttachments(client, ui, opts); + } else if ("delattachment".equalsIgnoreCase(cmd)) { + processDelAttachment(client, ui, opts); + } else if ("listauthkeys".equalsIgnoreCase(cmd)) { + processListAuthKeys(client, ui, opts); + } else if ("authenticate".equalsIgnoreCase(cmd)) { + processAuthenticate(client, ui, opts); + } else if ("authorize".equalsIgnoreCase(cmd)) { + processAuthorize(client, ui, opts); + } else if ("listkeys".equalsIgnoreCase(cmd)) { + processListKeys(client, ui, opts); + } else if ("addref".equalsIgnoreCase(cmd)) { + processAddRef(client, ui, opts); + } else if ("listrefs".equalsIgnoreCase(cmd)) { + 
processListRefs(client, ui, opts); + } else if ("delref".equalsIgnoreCase(cmd)) { + processDelRef(client, ui, opts); + } else if ("addparent".equalsIgnoreCase(cmd)) { + processAddParent(client, ui, opts); + } else if ("listparents".equalsIgnoreCase(cmd)) { + processListParents(client, ui, opts); + } else if ("delparent".equalsIgnoreCase(cmd)) { + processDelParent(client, ui, opts); + } else if ("preview".equalsIgnoreCase(cmd)) { + processPreview(client, ui, opts); + } else if ("execute".equalsIgnoreCase(cmd)) { + processExecute(client, ui, opts); + } else if ("listreadkeys".equalsIgnoreCase(cmd)) { + processListReadKeys(client, ui, opts); + } else if ("set".equalsIgnoreCase(cmd)) { + processSet(client, ui, opts); + } else { + return false; + } + return true; + } + public List getMenuLocation(DBClient client, UI ui) { + List rv = new ArrayList(); + rv.add("post"); + if (_currentMessage != null) + rv.add("create"); + return rv; + } + + private static final SimpleDateFormat _dayFmt = new SimpleDateFormat("yyyy/MM/dd"); + private static final String SQL_LIST_MANAGED_CHANNELS = "SELECT channelId FROM channelManageKey WHERE authPubKey = ?"; + private static final String SQL_LIST_POST_CHANNELS = "SELECT channelId FROM channelPostKey WHERE authPubKey = ?"; + /** channels */ + private void processChannels(DBClient client, UI ui, Opts opts) { + _itemIteratorIndex = 0; + _itemIsChannelList = true; + _itemKeys.clear(); + _itemText.clear(); + + boolean manageOnly = false; + String cap = opts.getOptValue("capability"); + if ( (cap != null) && ("manage".equalsIgnoreCase(cap)) ) { + // if we want capability=manage, then include ident+manage chans. + // if we want capability=post, include ident+manage+post+publicPost chans + // (since we can post on channels we have the identity key for or can manage) + manageOnly = true; + } + + List manageKeys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + ui.debugMessage("nym has access to " + manageKeys.size() + " management keys"); + List pubKeys = new ArrayList(); + // first, go through and find all the 'identity' channels - those that we have + // the actual channel signing key for + for (int i = 0; i < manageKeys.size(); i++) { + NymKey key = (NymKey)manageKeys.get(i); + if (key.getAuthenticated()) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + pubKeys.add(pub); + Hash chan = pub.calculateHash(); + long chanId = client.getChannelId(chan); + if (chanId >= 0) { + ui.debugMessage("nym has the identity key for " + chan.toBase64()); + ChannelInfo info = client.getChannel(chanId); + _itemKeys.add(new Long(chanId)); + _itemText.add("Identity channel " + CommandImpl.strip(info.getName()) + " (" + chan.toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } else { + ui.debugMessage("nym has a key that is not an identity key (" + chan.toBase64() + ")"); + } + } + } + + // now, go through and see what other channels our management keys are + // authorized to manage (beyond their identity channels) + Connection con = client.con(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = con.prepareStatement(SQL_LIST_MANAGED_CHANNELS); + for (int i = 0; i < pubKeys.size(); i++) { + SigningPublicKey key = (SigningPublicKey)pubKeys.get(i); + stmt.setBytes(1, key.getData()); + rs = stmt.executeQuery(); + while (rs.next()) { + // channelId + long chanId = rs.getLong(1); + if 
(!rs.wasNull()) { + Long id = new Long(chanId); + if (!_itemKeys.contains(id)) { + ChannelInfo info = client.getChannel(chanId); + if (info != null) { + ui.debugMessage("nym has a key that is an explicit management key for " + info.getChannelHash().toBase64()); + _itemKeys.add(id); + _itemText.add("Managed channel " + CommandImpl.strip(info.getName()) + " (" + info.getChannelHash().toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } else { + ui.debugMessage("nym has a key that is an explicit management key for an unknown channel (" + chanId + ")"); + } + } + } + } + rs.close(); + } + } catch (SQLException se) { + ui.errorMessage("Internal error listing channels", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + // continue on to see what channels our management keys are + // authorized to post in (beyond their identity and manageable channels) + stmt = null; + rs = null; + if (!manageOnly) { + try { + stmt = con.prepareStatement(SQL_LIST_POST_CHANNELS); + for (int i = 0; i < pubKeys.size(); i++) { + SigningPublicKey key = (SigningPublicKey)pubKeys.get(i); + stmt.setBytes(1, key.getData()); + rs = stmt.executeQuery(); + while (rs.next()) { + // channelId + long chanId = rs.getLong(1); + if (!rs.wasNull()) { + Long id = new Long(chanId); + if (!_itemKeys.contains(id)) { + ChannelInfo info = client.getChannel(chanId); + if (info != null) { + ui.debugMessage("nym has a key that is an explicit post key for " + info.getChannelHash().toBase64()); + _itemKeys.add(id); + _itemText.add("Authorized channel " + CommandImpl.strip(info.getName()) + " (" + info.getChannelHash().toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } else { + ui.debugMessage("nym has a key that is an explicit post key for an unknown channel (" + chanId + ")"); + } + } + } + } + rs.close(); + } + } catch (SQLException se) { + ui.errorMessage("Internal error listing channels", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + + List channelIds = client.getPublicPostingChannelIds(); + for (int i = 0; i < channelIds.size(); i++) { + Long id = (Long)channelIds.get(i); + if (!_itemKeys.contains(id)) { + ChannelInfo info = client.getChannel(id.longValue()); + if (info != null) { + _itemKeys.add(id); + _itemText.add("Public channel " + CommandImpl.strip(info.getName()) + " (" + info.getChannelHash().toBase64().substring(0,6) + "): " + CommandImpl.strip(info.getDescription())); + } + } + } + } + + ui.statusMessage(_itemKeys.size() + " channels matched - use 'next' to view them"); + ui.commandComplete(0, null); + } + + /** next [--lines $num] : iterate through the channels */ + private void processNext(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + String name = "channels"; + if (_itemIsChannelList) { + if (_itemIteratorIndex >= _itemKeys.size()) { + ui.statusMessage("No more " + name + " - use 'prev' to review earlier " + name); + ui.commandComplete(0, null); + } else { + int end = Math.min(_itemIteratorIndex+num, _itemKeys.size()); + ui.statusMessage(name + " " + _itemIteratorIndex + " through " + (end-1) + " of " + (_itemKeys.size()-1)); + while (_itemIteratorIndex < end) { + String desc = (String)_itemText.get(_itemIteratorIndex); + 
ui.statusMessage(_itemIteratorIndex + ": " + desc); + _itemIteratorIndex++; + } + int remaining = _itemKeys.size() - _itemIteratorIndex; + if (remaining > 0) + ui.statusMessage(remaining + " " + name + " remaining"); + else + ui.statusMessage("No more " + name + " - use 'prev' to review earlier " + name); + ui.commandComplete(0, null); + } + } else { + ui.statusMessage("Cannot iterate through the list, as no channels have been selected"); + ui.commandComplete(-1, null); + } + } + + /** prev [--lines $num] : iterate through the channels */ + private void processPrev(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + _itemIteratorIndex -= num; + if (_itemIteratorIndex < 0) + _itemIteratorIndex = 0; + processNext(client, ui, opts); + } + + /* create --channel ($index|$hash): begin the process of creating a new post */ + private void processCreate(DBClient client, UI ui, Opts opts) { + if (_currentMessage != null) { + ui.errorMessage("Cannot create a new message - an existing create process is already in progress"); + ui.errorMessage("Cancel or complete that process before continuing (with the cancel or execute commands)"); + ui.commandComplete(-1, null); + return; + } + + ChannelInfo channel = null; + String chan = opts.getOptValue("channel"); + if (chan != null) { + try { + int val = Integer.parseInt(chan); + if ( (val < 0) || (val >= _itemKeys.size()) ) { + ui.errorMessage("Channel index out of bounds"); + ui.commandComplete(-1, null); + return; + } + Long chanId = (Long)_itemKeys.get(val); + channel = client.getChannel(chanId.longValue()); + } catch (NumberFormatException nfe) { + ui.debugMessage("channel requested is not an index (" + chan + ")"); + // ok, not an integer, maybe its a full channel hash? + byte val[] = Base64.decode(chan); + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) { + long id = client.getChannelId(new Hash(val)); + if (id >= 0) { + channel = client.getChannel(id); + } else { + ui.errorMessage("Channel is not locally known: " + chan); + ui.commandComplete(-1, null); + return; + } + } else { + ui.errorMessage("Channel requested is not valid - either specify --channel $index or --channel $base64(channelHash)"); + ui.commandComplete(-1, null); + return; + } + } + } + if (channel == null) { + ui.errorMessage("Target channel must be specified"); + ui.commandComplete(-1, null); + return; + } + + resetContent(); + _currentMessage = new MessageInfo(); + _currentMessage.setTargetChannel(channel.getChannelHash()); + _currentMessage.setTargetChannelId(channel.getChannelId()); + // set the scope to the target (if we are authorized), or to the first + // channel we are authorized to post on + List priv = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, null); + for (int i = 0; i < priv.size(); i++) { + NymKey curKey = (NymKey)priv.get(i); + if (Constants.KEY_FUNCTION_MANAGE.equals(curKey.getFunction()) || + Constants.KEY_FUNCTION_POST.equals(curKey.getFunction())) { + SigningPrivateKey privKey = new SigningPrivateKey(curKey.getData()); + SigningPublicKey pub = KeyGenerator.getSigningPublicKey(privKey); + if (channel.getAuthorizedManagers().contains(pub)) { + _currentMessage.setScopeChannelId(channel.getChannelId()); + break; + } else if (channel.getAuthorizedPosters().contains(pub)) { + _currentMessage.setScopeChannelId(channel.getChannelId()); + break; + } + } + } + // not authorized, so lets just set the default scope to our first one + if (_currentMessage.getScopeChannelId() < 0) { + for (int i = 0; i < 
priv.size(); i++) { + NymKey curKey = (NymKey)priv.get(i); + if (Constants.KEY_FUNCTION_MANAGE.equals(curKey.getFunction()) || + Constants.KEY_FUNCTION_POST.equals(curKey.getFunction())) { + SigningPrivateKey privKey = new SigningPrivateKey(curKey.getData()); + SigningPublicKey pub = KeyGenerator.getSigningPublicKey(privKey); + long chanId = client.getChannelId(pub.calculateHash()); + if (chanId >= 0) { + _currentMessage.setScopeChannelId(chanId); + break; + } + } + } + } + _currentMessage.setMessageId(createEdition(client)); + ui.statusMessage("Posting to '" + CommandImpl.strip(channel.getName()) + "' (" + channel.getChannelHash().toBase64().substring(0,6) + ")"); + + SigningPublicKey pub = getNymPublicKey(client); + if ( (pub != null) && (!channel.getChannelHash().equals(pub.calculateHash())) ) { + long id = client.getChannelId(pub.calculateHash()); + if (id >= 0) { + ChannelInfo author = client.getChannel(id); + _currentMessage.setAuthorChannelId(id);//pub.calculateHash()); + ui.statusMessage("Defaulting identity channel " + CommandImpl.strip(author.getName()) + " (" + pub.calculateHash().toBase64().substring(0,6) + ") as the author"); + } + } + + ui.statusMessage("Post creation process initiated"); + ui.statusMessage("Please specify fields as needed, and complete the post creation"); + ui.statusMessage("process with 'execute', or cancel the process with 'cancel'"); + ui.commandComplete(0, null); + } + + private SigningPublicKey getNymPublicKey(DBClient client) { + List manageKeys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + List pubKeys = new ArrayList(); + // find all the 'identity' channels - those that we have + // the actual channel signing key for + for (int i = 0; i < manageKeys.size(); i++) { + NymKey key = (NymKey)manageKeys.get(i); + if (key.getAuthenticated()) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + Hash chan = pub.calculateHash(); + long chanId = client.getChannelId(chan); + if (chanId >= 0) + pubKeys.add(pub); + } + } + if (pubKeys.size() == 1) { + return (SigningPublicKey)pubKeys.get(0); + } else { + return null; + } + } + + /** today's date, but with a randomized hhmmss.SSS component */ + private long createEdition(DBClient client) { + long now = System.currentTimeMillis(); + now -= (now % (24*60*60*1000)); + now += client.ctx().random().nextLong(24*60*60*1000); + return now; + } + + private void processPreview(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No creation or update process in progress"); + ui.commandComplete(-1, null); + return; + } + ui.statusMessage(_currentMessage.toString()); + if (_avatarFile != null) + ui.statusMessage("Loading the message avatar from: " + _avatarFile); + + ui.statusMessage("Pages: " + _pageFiles.size()); + for (int i = 0; i < _pageFiles.size(); i++) { + String filename = (String)_pageFiles.get(i); + String type = ((Properties)_pageConfig.get(i)).getProperty(Constants.MSG_PAGE_CONTENT_TYPE); + ui.statusMessage("Page " + i + ": loaded from " + CommandImpl.strip(filename) + " (type: " + CommandImpl.strip(type) + ")"); + } + + ui.statusMessage("Attachments: " + _attachmentFiles.size()); + for (int i = 0; i < _attachmentFiles.size(); i++) { + String filename = (String)_attachmentFiles.get(i); + Properties cfg = (Properties)_attachmentConfig.get(i); + String type = cfg.getProperty(Constants.MSG_PAGE_CONTENT_TYPE); + String name = 
cfg.getProperty(Constants.MSG_ATTACH_NAME); + String desc = cfg.getProperty(Constants.MSG_ATTACH_DESCRIPTION); + ui.statusMessage("Attachment " + i + ": loaded from " + CommandImpl.strip(filename) + " (type: " + CommandImpl.strip(type) + ")"); + ui.statusMessage(" : suggested name: '" + CommandImpl.strip(name) + "', description: '" + CommandImpl.strip(desc) + "'"); + } + + if (_authenticationKey != null) { + SigningPrivateKey priv = new SigningPrivateKey(_authenticationKey.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + ui.statusMessage("Authenticating with the private key for " + pub.calculateHash().toBase64().substring(0,6)); + } + if (_authorizationKey != null) { + SigningPrivateKey priv = new SigningPrivateKey(_authorizationKey.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + ui.statusMessage("Authorizing with the private key for " + pub.calculateHash().toBase64().substring(0,6)); + } + + if (_referenceNodes.size() > 0) { + ui.statusMessage("References: "); + ListWalker w = new ListWalker(ui); + ReferenceNode.walk(_referenceNodes, w); + } + + ui.statusMessage("Parents (most recent first):"); + for (int i = 0; i < _parents.size(); i++) { + SyndieURI uri = (SyndieURI)_parents.get(i); + long id = client.getChannelId(uri.getScope()); + MessageInfo msg = null; + if (id >= 0) { + msg = client.getMessage(id, uri.getMessageId()); + if (msg != null) { + ui.statusMessage(i + ": " + msg.getTargetChannel().toBase64().substring(0,6) + + " - '" + CommandImpl.strip(msg.getSubject()) + "' (" + msg.getMessageId() + ")"); + } + } + if (msg == null) + ui.statusMessage(i + ": " + uri.getScope().toBase64().substring(0,6) + " (" + uri.getMessageId().longValue() + ")"); + } + + int page = (int)opts.getOptLong("page", -1); + if ( (page >= 0) && (page < _pageFiles.size()) ) { + String filename = (String)_pageFiles.get(page); + String type = ((Properties)_pageConfig.get(page)).getProperty(Constants.MSG_PAGE_CONTENT_TYPE); + ui.statusMessage("Page " + page + " (loaded from " + CommandImpl.strip(filename) + " (type: " + CommandImpl.strip(type) + ")"); + + File f = new File(filename); + try { + BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(f), "UTF-8")); + String line = null; + while ( (line = in.readLine()) != null) + ui.statusMessage(line); + } catch (IOException ioe) { + ui.errorMessage("Error previewing the page", ioe); + } + } + + ui.commandComplete(0, null); + } + + private void processMeta(DBClient client, UI ui, Opts opts) { + long channelIndex = -1; + Hash channel = null; + String chan = opts.getOptValue("channel"); + if (chan != null) { + try { + long val = Long.parseLong(chan); + channelIndex = val; + } catch (NumberFormatException nfe) { + ui.debugMessage("channel requested is not an index (" + chan + ")"); + // ok, not an integer, maybe its a full channel hash? 
+ byte val[] = Base64.decode(chan); + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) { + channel = new Hash(val); + ui.debugMessage("channel requested is a hash (" + channel.toBase64() + ")"); + } else { + ui.errorMessage("Channel requested is not valid - either specify --channel $index or --channel $base64(channelHash)"); + ui.commandComplete(-1, null); + return; + } + } + } + + ChannelInfo info = null; + + if (_currentMessage != null) + info = client.getChannel(_currentMessage.getTargetChannelId()); + + long channelId = -1; + if ( (channelIndex >= 0) && (channelIndex < _itemKeys.size()) ) { + channelId = ((Long)_itemKeys.get((int)channelIndex)).longValue(); + info = client.getChannel(channelId); + } else if (channel != null) { + channelId = client.getChannelId(channel); + info = client.getChannel(channelId); + } + + if (info == null) { + ui.debugMessage("channelIndex=" + channelIndex + " itemKeySize: " + _itemKeys.size()); + ui.debugMessage("channel=" + channelIndex); + ui.errorMessage("Invalid or unknown channel requested"); + ui.commandComplete(-1, null); + return; + } + + ui.statusMessage(info.toString()); + } + + /** addPage [--page $num] --in ($filename|stdin) [--type $contentType] */ + private void processAddPage(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No posting in progress"); + ui.commandComplete(-1, null); + return; + } + String in = opts.getOptValue("in"); + if (in != null) { + int index = _pageFiles.indexOf(in); + if (index >= 0) { + ui.errorMessage("The file " + in + " is already slotted as page " + index); + ui.commandComplete(-1, null); + return; + } + } else { + ui.errorMessage("The file must be specified with --in $filename"); + ui.commandComplete(-1, null); + return; + } + String type = opts.getOptValue("type"); + if (type == null) + type = "text/plain"; + int page = (int)opts.getOptLong("page", _pageFiles.size()); + boolean deleteAfterPost = false; + File f = null; + if ("stdin".equalsIgnoreCase(in)) { + String content = ui.readStdIn(); + try { + f = File.createTempFile("stdin", ".txt", client.getTempDir()); + FileWriter out = new FileWriter(f); + out.write(content); + out.close(); + in = f.getPath(); + deleteAfterPost = true; + } catch (IOException ioe) { + ui.errorMessage("Error buffering the new page", ioe); + ui.commandComplete(-1, null); + return; + } + } + f = new File(in); + if (!f.exists()) { + ui.errorMessage("Page file does not exist"); + ui.commandComplete(-1, null); + } else if (!f.canRead()) { + ui.errorMessage("Page file is not readable"); + ui.commandComplete(-1, null); + } else if (!f.isFile()) { + ui.errorMessage("Page file is not a normal file"); + ui.commandComplete(-1, null); + } else if ( (page < 0) || (page > _pageFiles.size()) ) { + ui.errorMessage("Page index is out of range"); + ui.commandComplete(-1, null); + } else { + _pageFiles.add(page, in); + Properties cfg = new Properties(); + cfg.setProperty(Constants.MSG_PAGE_CONTENT_TYPE, CommandImpl.strip(type)); + _pageConfig.add(page, cfg); + if (deleteAfterPost) { + _toDelete.add(in); + ui.statusMessage("Page " + page + " read from standard input (size: " + f.length() + " bytes, type: " + CommandImpl.strip(type) + ")"); + } else { + ui.statusMessage("Page " + page + " configured to use " + CommandImpl.strip(in) + " (size: " + f.length() + " bytes, type: " + CommandImpl.strip(type) + ")"); + } + ui.commandComplete(0, null); + } + } + /** listpages : display a list of pages currently sloted for posting */ + private void 
processListPages(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No posting in progress"); + ui.commandComplete(-1, null); + return; + } + ui.statusMessage("Pages: " + _pageFiles.size()); + for (int i = 0; i < _pageFiles.size(); i++) { + String filename = (String)_pageFiles.get(i); + String type = ((Properties)_pageConfig.get(i)).getProperty(Constants.MSG_PAGE_CONTENT_TYPE); + ui.statusMessage("Page " + i + ": loaded from " + CommandImpl.strip(filename) + " (type: " + CommandImpl.strip(type) + ")"); + } + ui.commandComplete(-1, null); + } + /** delpage $num : delete the given page */ + private void processDelPage(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No posting in progress"); + ui.commandComplete(-1, null); + return; + } + + String arg = opts.getArg(0); + if (arg == null) { + ui.errorMessage("Usage: delpage $pageNumber"); + ui.commandComplete(-1, null); + return; + } + try { + int page = Integer.parseInt(arg); + if ( (page >= 0) && (page < _pageFiles.size()) ) { + _pageFiles.remove(page); + _pageConfig.remove(page); + ui.statusMessage("Not including page " + page); + ui.commandComplete(0, null); + } else { + ui.statusMessage("Page " + page + " out of range"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.statusMessage("Invalid page requested"); + ui.commandComplete(-1, null); + } + } + + /** + * addattachment [--attachment $num] --in $filename [--type $contentType] [--name $name] [--description $desc] + */ + private void processAddAttachment(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No posting in progress"); + ui.commandComplete(-1, null); + return; + } + String in = opts.getOptValue("in"); + if (in != null) { + int index = _attachmentFiles.indexOf(in); + if (index >= 0) { + ui.errorMessage("The file " + in + " is already slotted as attachment " + index); + ui.commandComplete(-1, null); + return; + } + } + String type = opts.getOptValue("type"); + if (type == null) + type = "application/octet-stream"; + int num = (int)opts.getOptLong("attachment", _attachmentFiles.size()); + File f = new File(in); + if (!f.exists()) { + ui.errorMessage("Attachment file does not exist"); + ui.commandComplete(-1, null); + } else if (!f.canRead()) { + ui.errorMessage("Attachment file is not readable"); + ui.commandComplete(-1, null); + } else if (!f.isFile()) { + ui.errorMessage("Attachment file is not a normal file"); + ui.commandComplete(-1, null); + } else if ( (num < 0) || (num > _attachmentFiles.size()) ) { + ui.errorMessage("Attachment index is out of range"); + ui.commandComplete(-1, null); + } else { + _attachmentFiles.add(num, in); + String desc = opts.getOptValue("description"); + if (desc == null) desc = ""; + String name = opts.getOptValue("name"); + if (name == null) name = f.getName(); + ui.debugMessage("Options: " + opts.getOptNames()); + Properties cfg = new Properties(); + cfg.setProperty(Constants.MSG_ATTACH_CONTENT_TYPE, CommandImpl.strip(type)); + cfg.setProperty(Constants.MSG_ATTACH_DESCRIPTION, CommandImpl.strip(desc)); + cfg.setProperty(Constants.MSG_ATTACH_NAME, CommandImpl.strip(name)); + _attachmentConfig.add(num, cfg); + ui.statusMessage("Attachment " + num + + " (" + CommandImpl.strip(name) + " - '" + CommandImpl.strip(desc) + + "') configured to use " + CommandImpl.strip(in) + + " (type: " + CommandImpl.strip(type) + ")"); + ui.commandComplete(0, null); + } + } + /** listattachments : display a list of 
attachments currently slotted for posting */
+    private void processListAttachments(DBClient client, UI ui, Opts opts) {
+        if (_currentMessage == null) {
+            ui.errorMessage("No posting in progress");
+            ui.commandComplete(-1, null);
+            return;
+        }
+        ui.statusMessage("Attachments: " + _attachmentFiles.size());
+        for (int i = 0; i < _attachmentFiles.size(); i++) {
+            String filename = (String)_attachmentFiles.get(i);
+            Properties cfg = (Properties)_attachmentConfig.get(i);
+            String type = cfg.getProperty(Constants.MSG_ATTACH_CONTENT_TYPE);
+            String name = cfg.getProperty(Constants.MSG_ATTACH_NAME);
+            String desc = cfg.getProperty(Constants.MSG_ATTACH_DESCRIPTION);
+            ui.statusMessage("Attachment " + i + ": loaded from " + CommandImpl.strip(filename) + " (type: " + CommandImpl.strip(type) + ")");
+            ui.statusMessage(" : suggested name: '" + CommandImpl.strip(name) + "', description: '" + CommandImpl.strip(desc) + "'");
+        }
+        ui.commandComplete(0, null);
+    }
+    /** delattachment $num */
+    private void processDelAttachment(DBClient client, UI ui, Opts opts) {
+        if (_currentMessage == null) {
+            ui.errorMessage("No posting in progress");
+            ui.commandComplete(-1, null);
+            return;
+        }
+
+        String arg = opts.getArg(0);
+        if (arg == null) {
+            ui.errorMessage("Usage: delattachment $attachmentNumber");
+            ui.commandComplete(-1, null);
+            return;
+        }
+        try {
+            int num = Integer.parseInt(arg);
+            if ( (num >= 0) && (num < _attachmentFiles.size()) ) {
+                _attachmentFiles.remove(num);
+                _attachmentConfig.remove(num);
+                ui.statusMessage("Not including attachment " + num);
+                ui.commandComplete(0, null);
+            } else {
+                ui.statusMessage("Attachment " + num + " out of range");
+                ui.commandComplete(-1, null);
+            }
+        } catch (NumberFormatException nfe) {
+            ui.statusMessage("Invalid attachment requested");
+            ui.commandComplete(-1, null);
+        }
+    }
+
+    /**
+     * listauthkeys [--authorizedOnly $boolean]
+     * display an indexed list of signing keys that the nym has access to. if
+     * requested, only includes those keys which have been marked as authorized to
+     * post in the channel (or authorized to manage the channel)
+     */
+    private void processListAuthKeys(DBClient client, UI ui, Opts opts) {
+        if ( (_currentMessage == null) || (_currentMessage.getTargetChannel() == null) ) {
+            ui.errorMessage("Can only list keys once a target channel has been selected");
+            ui.commandComplete(-1, null);
+            return;
+        }
+        _listedNymKeys.clear();
+        boolean auth = opts.getOptBoolean("authorizedOnly", true);
+        Hash scope = _currentMessage.getTargetChannel();
+        if (!auth)
+            scope = null;
+        List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), scope, Constants.KEY_FUNCTION_MANAGE);
+        for (int i = 0; i < keys.size(); i++) {
+            NymKey key = (NymKey)keys.get(i);
+            ui.statusMessage("key " + _listedNymKeys.size() + ": " + key.getType() + " for " + key.getChannel().toBase64().substring(0,6) + " (authenticated? " + key.getAuthenticated() + ")");
+            _listedNymKeys.add(key);
+        }
+        keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), scope, Constants.KEY_FUNCTION_POST);
+        for (int i = 0; i < keys.size(); i++) {
+            NymKey key = (NymKey)keys.get(i);
+            ui.statusMessage("key " + _listedNymKeys.size() + ": " + key.getType() + " for " + key.getChannel().toBase64().substring(0,6) + " (authenticated? 
" + key.getAuthenticated() + ")"); + _listedNymKeys.add(key); + } + // now offer the manage keys for authentication only + keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), null, Constants.KEY_FUNCTION_MANAGE); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + if (key.getChannel().equals(pub.calculateHash())) { + ui.statusMessage("identity key " + _listedNymKeys.size() + ": " + key.getChannel().toBase64().substring(0,6) + " (for authentication only)"); + _listedNymKeys.add(key); + } + } + ui.commandComplete(0, null); + } + + /** authenticate $index */ + private void processAuthenticate(DBClient client, UI ui, Opts opts) { + if (_listedNymKeys.size() <= 0) { + ui.errorMessage("No keys listed (list them through 'listauthkeys')"); + ui.commandComplete(-1, null); + return; + } + String arg = opts.getArg(0); + if (arg == null) { + ui.errorMessage("Usage: authenticate $num"); + ui.commandComplete(-1, null); + return; + } + try { + int num = Integer.parseInt(arg); + if ( (num >= 0) && (num < _listedNymKeys.size()) ) { + _authenticationKey = (NymKey)_listedNymKeys.get(num); + SigningPrivateKey priv = new SigningPrivateKey(_authenticationKey.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + long authenticationId = client.getChannelId(pub.calculateHash()); + _currentMessage.setScopeChannelId(authenticationId); + ui.statusMessage("Authenticating with the private key for " + pub.calculateHash().toBase64().substring(0,6)); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Authentication index out of range"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.errorMessage("Invalid authentication index"); + ui.commandComplete(-1, null); + } + } + /** authorize $index */ + private void processAuthorize(DBClient client, UI ui, Opts opts) { + if (_listedNymKeys.size() <= 0) { + ui.errorMessage("No keys listed (list them through 'listauthkeys')"); + ui.commandComplete(-1, null); + return; + } + String arg = opts.getArg(0); + if (arg == null) { + ui.errorMessage("Usage: authorize $num"); + ui.commandComplete(-1, null); + return; + } + try { + int num = Integer.parseInt(arg); + if ( (num >= 0) && (num < _listedNymKeys.size()) ) { + _authorizationKey = (NymKey)_listedNymKeys.get(num); + SigningPrivateKey priv = new SigningPrivateKey(_authorizationKey.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + ui.statusMessage("Authorizing with the private key for " + pub.calculateHash().toBase64().substring(0,6)); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Authorization index out of range"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.errorMessage("Invalid authorization index"); + ui.commandComplete(-1, null); + } + } + + /** listkeys [--scope $scope] [--type $type] */ + private void processListKeys(DBClient client, UI ui, Opts opts) { + byte chan[] = opts.getOptBytes("scope"); + Hash scope = null; + if (chan != null) + scope = new Hash(chan); + String type = opts.getOptValue("type"); + if (type != null) { + if (!Constants.KEY_FUNCTION_MANAGE.equalsIgnoreCase(type) && + !Constants.KEY_FUNCTION_POST.equalsIgnoreCase(type) && + !Constants.KEY_FUNCTION_READ.equalsIgnoreCase(type) && + !Constants.KEY_FUNCTION_REPLY.equalsIgnoreCase(type)) { + 
ui.errorMessage("Key type must be one of the following:"); + ui.errorMessage(Constants.KEY_FUNCTION_MANAGE + " (for channel management)"); + ui.errorMessage(Constants.KEY_FUNCTION_POST + " (for posting to a channel)"); + ui.errorMessage(Constants.KEY_FUNCTION_READ + " (for reading a channel)"); + ui.errorMessage(Constants.KEY_FUNCTION_REPLY+ " (for decrypting private replies on a channel)"); + ui.commandComplete(-1, null); + return; + } + } + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), scope, type); + TreeMap keysByScope = new TreeMap(); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + List scopeKeys = (List)keysByScope.get(key.getChannel().toBase64()); + if (scopeKeys == null) { + scopeKeys = new ArrayList(); + keysByScope.put(key.getChannel().toBase64(), scopeKeys); + } + scopeKeys.add(key); + } + for (Iterator iter = keysByScope.values().iterator(); iter.hasNext(); ) { + List scopeKeys = (List)iter.next(); + if (scopeKeys.size() <= 0) continue; + Hash chanHash = ((NymKey)scopeKeys.get(0)).getChannel(); + long chanId = client.getChannelId(chanHash); + ChannelInfo info = null; + if (chanId >= 0) + info = client.getChannel(chanId); + if (info != null) + ui.statusMessage("Private keys for '" + CommandImpl.strip(info.getName()) + "' (" + chanHash.toBase64() + ")"); + else + ui.statusMessage("Private keys for unknown (" + chanHash.toBase64() + ")"); + for (int i = 0; i < scopeKeys.size(); i++) { + NymKey key = (NymKey)scopeKeys.get(i); + if (Constants.KEY_FUNCTION_MANAGE.equalsIgnoreCase(key.getFunction())) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(priv); + if (pub.calculateHash().equals(chanHash)) { + ui.statusMessage("- identity key: " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } else { + ui.statusMessage("- manage key (" + + (key.getAuthenticated()?"authenticated":"not authenticated") + + "): " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } + } else if (Constants.KEY_FUNCTION_POST.equalsIgnoreCase(key.getFunction())) { + ui.statusMessage("- post key (" + + (key.getAuthenticated()?"authenticated":"not authenticated") + + "): " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } else if (Constants.KEY_FUNCTION_READ.equalsIgnoreCase(key.getFunction())) { + ui.statusMessage("- read key (" + + (key.getAuthenticated()?"authenticated":"not authenticated") + + "): " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } else if (Constants.KEY_FUNCTION_REPLY.equalsIgnoreCase(key.getFunction())) { + ui.statusMessage("- reply key (" + + (key.getAuthenticated()?"authenticated":"not authenticated") + + "): " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } else { + ui.statusMessage("Channel key of unknown type [" + key.getFunction() + "] (" + + (key.getAuthenticated()?"authenticated":"not authenticated") + + "): " + client.ctx().sha().calculateHash(key.getData()).toBase64()); + } + } + } + ui.commandComplete(0, null); + } + + /** + * addref (--filename | [--name $name] --uri $uri [--reftype $type] [--description $desc]) + * + * addref --readkey $keyHash --scope $scope [--name $name] [--description $desc] + * add a reference that includes the given channel read key (AES256) + * addref --postkey $keyHash --scope $scope [--name $name] [--description $desc] + * add a reference that includes the given channel post key (DSA private) + * addref --managekey $keyHash 
--scope $scope [--name $name] [--description $desc] + * add a reference that includes the given channel manage key (DSA private) + * addref --replykey $keyHash --scope $scope [--name $name] [--description $desc] + * add a reference that includes the given channel's reply key (ElGamal private) + */ + private void processAddRef(DBClient client, UI ui, Opts opts) { + String filename = opts.getOptValue("filename"); + if (filename != null) { + FileInputStream in = null; + try { + in = new FileInputStream(filename); + List roots = ReferenceNode.buildTree(in); + _referenceNodes.addAll(roots); + Walker w = new Walker(); + ReferenceNode.walk(roots, w); + ui.statusMessage("Added " + w.getNodeCount() + " references"); + return; + } catch (IOException ioe) { + ui.errorMessage("Cannot add references from " + filename, ioe); + ui.commandComplete(-1, null); + return; + } + } + + String name = opts.getOptValue("name"); + String uriStr = opts.getOptValue("uri"); + String type = opts.getOptValue("reftype"); + String desc = opts.getOptValue("description"); + + if (opts.getOptValue("readkey") != null) { + type = "channel read key"; + byte channel[] = opts.getOptBytes("scope"); + byte keyHash[] = opts.getOptBytes("readkey"); + List keys = client.getReadKeys(new Hash(channel), client.getLoggedInNymId(), client.getPass()); + ui.debugMessage("read keys for scope " + Base64.encode(channel) + ": " + keys.size() + + " (looking for " + Base64.encode(keyHash) + ")"); + for (int i = 0; i < keys.size(); i++) { + SessionKey key = (SessionKey)keys.get(i); + Hash calcHash = key.calculateHash(); + ui.debugMessage("key " + i + " has hash: " + calcHash.toBase64() + " (data: " + Base64.encode(key.getData()) + ")"); + if (DataHelper.eq(calcHash.getData(), keyHash)) { + SyndieURI uri = SyndieURI.createKey(new Hash(channel), key); + uriStr = uri.toString(); + break; + } + } + } else if (opts.getOptValue("postkey") != null) { + type = "channel post key"; + byte channel[] = opts.getOptBytes("scope"); + byte keyHash[] = opts.getOptBytes("postkey"); + Hash chan = new Hash(channel); + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), chan, Constants.KEY_FUNCTION_POST); + ui.debugMessage("post keys for scope " + Base64.encode(channel) + ": " + keys.size() + + " (looking for " + Base64.encode(keyHash) + ")"); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + Hash calcHash = client.ctx().sha().calculateHash(key.getData()); + if (DataHelper.eq(calcHash.getData(), keyHash)) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey expectedPub = client.ctx().keyGenerator().getSigningPublicKey(priv); + long channelId = client.getChannelId(chan); + ChannelInfo info = client.getChannel(channelId); + + Set postKeys = info.getAuthorizedPosters(); + boolean authorized = false; + for (Iterator iter = postKeys.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + if (pub.equals(expectedPub)) { + authorized = true; + break; + } + } + if (!authorized) { + Set manageKeys = info.getAuthorizedManagers(); + for (Iterator iter = manageKeys.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + if (pub.equals(expectedPub)) { + authorized = true; + break; + } + } + } + if (!authorized) { + if (info.getIdentKey().equals(expectedPub)) { + authorized = true; + } + } + + if (!authorized) { + ui.errorMessage("The specified channel post key is not authorized to post to the channel"); + return; + } + SyndieURI 
uri = SyndieURI.createKey(chan, Constants.KEY_FUNCTION_POST, priv); + uriStr = uri.toString(); + break; + } + } + } + + if ( (opts.getOptValue("managekey") != null) || + ( (opts.getOptValue("postkey") != null) && (uriStr == null) ) ) { // manage keys can be used to post + byte keyHash[] = null; + String keyType = null; + if (opts.getOptValue("postkey") != null) { + type = "channel post key"; + keyHash = opts.getOptBytes("postkey"); + keyType = Constants.KEY_FUNCTION_POST; + } else { + type = "channel manage key"; + keyHash = opts.getOptBytes("managekey"); + keyType = Constants.KEY_FUNCTION_MANAGE; + } + byte channel[] = opts.getOptBytes("scope"); + Hash chan = new Hash(channel); + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), chan, Constants.KEY_FUNCTION_MANAGE); + ui.debugMessage("manage keys for scope " + Base64.encode(channel) + ": " + keys.size() + + " (looking for " + Base64.encode(keyHash) + ")"); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + Hash calcHash = client.ctx().sha().calculateHash(key.getData()); + ui.debugMessage("key " + i + " has hash: " + calcHash.toBase64()); + if (DataHelper.eq(calcHash.getData(), keyHash)) { + SigningPrivateKey priv = new SigningPrivateKey(key.getData()); + SigningPublicKey expectedPub = client.ctx().keyGenerator().getSigningPublicKey(priv); + long channelId = client.getChannelId(chan); + ChannelInfo info = client.getChannel(channelId); + + if (info == null) { + ui.errorMessage("We cannot verify the authorization of the key, as the channel is not known"); + return; + } + + ui.debugMessage("channel found (" + channelId + "/" + info.getName() + ")"); + boolean authorized = false; + Set manageKeys = info.getAuthorizedManagers(); + for (Iterator iter = manageKeys.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + if (pub.equals(expectedPub)) { + ui.debugMessage("Key is one of the authorized manager keys"); + authorized = true; + break; + } + } + if (!authorized) { + if (info.getIdentKey().equals(expectedPub)) { + ui.debugMessage("Key is the identity key"); + authorized = true; + } + } + + if (!authorized) { + ui.errorMessage("The specified channel manage key is not authorized to manage the channel"); + return; + } + SyndieURI uri = SyndieURI.createKey(chan, keyType, priv); + uriStr = uri.toString(); + ui.debugMessage("URI: " + uriStr); + break; + } + } + } else if (opts.getOptValue("replykey") != null) { + type = "channel reply key"; + byte channel[] = opts.getOptBytes("scope"); + byte keyHash[] = opts.getOptBytes("replykey"); + Hash chan = new Hash(channel); + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), chan, Constants.KEY_FUNCTION_REPLY); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + Hash calcHash = client.ctx().sha().calculateHash(key.getData()); + if (DataHelper.eq(calcHash.getData(), keyHash)) { + PrivateKey priv = new PrivateKey(key.getData()); + PublicKey expectedPub = client.ctx().keyGenerator().getPublicKey(priv); + long channelId = client.getChannelId(chan); + ChannelInfo info = client.getChannel(channelId); + + boolean authorized = false; + if (info.getEncryptKey().equals(expectedPub)) + authorized = true; + + if (!authorized) { + ui.errorMessage("The specified channel reply key is not authorized to decrypt the channel's replies"); + return; + } + SyndieURI uri = SyndieURI.createKey(chan, priv); + uriStr = uri.toString(); + break; + } + } + } + + if (uriStr == null) { + 
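+            // Reaching this point means no --uri was supplied and none of the
+            // --readkey/--postkey/--managekey/--replykey lookups above matched a
+            // locally stored key, so there is nothing to build the reference from.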
ui.errorMessage("URI is required (--uri syndieURI)"); + ui.commandComplete(-1, null); + return; + } + SyndieURI uri = null; + try { + uri = new SyndieURI(uriStr); + } catch (URISyntaxException use) { + ui.errorMessage("URI is not valid (" + uriStr + ")", use); + ui.commandComplete(-1, null); + return; + } + + if (name == null) name = type; + _referenceNodes.add(new ReferenceNode(name, uri, desc, type)); + ui.statusMessage("Reference added"); + } + + private class Walker implements ReferenceNode.Visitor { + private int _nodes; + public Walker() { _nodes = 0; } + public void visit(ReferenceNode node, int depth, int siblingOrder) { _nodes++; } + public int getNodeCount() { return _nodes; } + } + + /** listrefs: display a list of references already added, prefixed by an index */ + private void processListRefs(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("Can only list references once a target channel has been selected"); + ui.commandComplete(-1, null); + return; + } + + ui.statusMessage("References: "); + ListWalker w = new ListWalker(ui); + ReferenceNode.walk(_referenceNodes, w); + ui.commandComplete(0, null); + } + + private class ListWalker implements ReferenceNode.Visitor { + private UI _ui; + private int _nodes; + public ListWalker(UI ui) { _ui = ui; _nodes = 0; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + StringBuffer walked = new StringBuffer(); + walked.append(_nodes).append(": "); + for (int i = 0; i < indent; i++) + walked.append('\t'); + if (node.getName() != null) + walked.append('\"').append(CommandImpl.strip(node.getName())).append("\" "); + if (node.getURI() != null) + walked.append(node.getURI().toString()); + _ui.statusMessage(walked.toString()); + walked.setLength(0); + walked.append(" "); + for (int i = 0; i < indent; i++) + walked.append('\t'); + if (node.getDescription() != null) + walked.append(CommandImpl.strip(node.getDescription())).append(" "); + if (node.getReferenceType() != null) + walked.append("(type: ").append(CommandImpl.strip(node.getReferenceType())).append(")"); + _ui.statusMessage(walked.toString()); + _nodes++; + } + } + + /** delref $index */ + private void processDelRef(DBClient client, UI ui, Opts opts) { + if (_referenceNodes.size() <= 0) { + ui.errorMessage("No references specified"); + ui.commandComplete(-1, null); + return; + } + String arg = opts.getArg(0); + if (arg == null) { + ui.errorMessage("Usage: delref $num"); + ui.commandComplete(-1, null); + return; + } + try { + int num = Integer.parseInt(arg); + DelWalker w = new DelWalker(ui, num); + ReferenceNode.walk(_referenceNodes, w); + if (w.refDeleted()) { + ui.commandComplete(0, null); + } else { + ui.errorMessage("No reference at index " + num); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.errorMessage("Invalid reference number", nfe); + ui.commandComplete(-1, null); + } + } + + private class DelWalker implements ReferenceNode.Visitor { + private UI _ui; + private int _nodes; + private int _toDelete; + private boolean _deleted; + public DelWalker(UI ui, int toDelete) { + _ui = ui; + _nodes = 0; + _toDelete = toDelete; + _deleted = false; + } + public boolean refDeleted() { return _deleted; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + if (_nodes > _toDelete) { + return; + } else if (_nodes == _toDelete) { + _nodes++; + if (node.getChildCount() == 0) { + _ui.statusMessage("Removing reference node " + _toDelete + " (" + node.getName() + ")"); + ReferenceNode 
parent = node.getParent(); + if (parent != null) { + parent.removeChild(node); + } else { + for (int i = 0; i < _referenceNodes.size(); i++) { + if (_referenceNodes.get(i) == node) { + _referenceNodes.remove(i); + break; + } + } + } + _deleted = true; + return; + } else { + _ui.errorMessage("Not removing reference node " + _toDelete + " - please remove its children first"); + return; + } + } else { + _nodes++; + } + } + } + + /** addparent --uri $uri [--order $num] */ + private void processAddParent(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("Can only add parents once a target channel has been selected"); + ui.commandComplete(-1, null); + return; + } + String uriStr = opts.getOptValue("uri"); + int index = (int)opts.getOptLong("order", _parents.size()); + SyndieURI uri = null; + try { + uri = new SyndieURI(uriStr); + if ( (uri.getScope() != null) && (uri.getMessageId() != null) ) { + if ( (index >= 0) && (index <= _parents.size()) ) { + _parents.add(index, uri); + ui.statusMessage("Parent URI added"); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Order is out of range"); + ui.commandComplete(-1, null); + } + } else { + ui.errorMessage("URI is valid, but does not refer to a message"); + ui.commandComplete(-1, null); + } + } catch (URISyntaxException use) { + ui.errorMessage("URI is not valid", use); + ui.commandComplete(-1, null); + } + } + + /** listparents : display a list of URIs this new post will be marked as replying to (most recent parent at index 0) */ + private void processListParents(DBClient client, UI ui, Opts opts) { + ui.statusMessage("Parents (most recent first):"); + for (int i = 0; i < _parents.size(); i++) { + SyndieURI uri = (SyndieURI)_parents.get(i); + long id = client.getChannelId(uri.getScope()); + MessageInfo msg = null; + if (id >= 0) { + msg = client.getMessage(id, uri.getMessageId()); + if (msg != null) { + ui.statusMessage(i + ": " + msg.getTargetChannel().toBase64().substring(0,6) + + " - '" + CommandImpl.strip(msg.getSubject()) + "' (" + msg.getMessageId() + ")"); + } + } + if (msg == null) + ui.statusMessage(i + ": " + uri.getScope().toBase64().substring(0,6) + " (" + uri.getMessageId().longValue() + ")"); + } + ui.commandComplete(0, null); + } + /** delparent $index */ + private void processDelParent(DBClient client, UI ui, Opts opts) { + if (_parents.size() <= 0) { + ui.errorMessage("No parents specified"); + ui.commandComplete(-1, null); + return; + } + String arg = opts.getArg(0); + if (arg == null) { + ui.errorMessage("Usage: delparent $num"); + ui.commandComplete(-1, null); + return; + } + try { + int num = Integer.parseInt(arg); + if ( (num >= 0) && (num < _parents.size()) ) { + SyndieURI uri = (SyndieURI)_parents.remove(num); + ui.statusMessage("Parent removed: " + uri); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Index out of bounds"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.errorMessage("Invalid index", nfe); + ui.commandComplete(-1, null); + } + } + + /** + * execute [--out $filename] : actually generate the post, exporting it to + * the given file, and then importing it into the local database + */ + private void processExecute(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No post in progress"); + ui.commandComplete(-1, null); + return; + } + + long scopeId = _currentMessage.getScopeChannelId(); + if (scopeId < 0) { + ui.errorMessage("No scope specified?"); + ui.commandComplete(-1, null); + 
return; + } + ChannelInfo scopeChan = client.getChannel(scopeId); // not necessarily == targetChannelId! + + String out = opts.getOptValue("out"); + if (out == null) { + File chanDir = new File(client.getOutboundDir(), scopeChan.getChannelHash().toBase64()); + chanDir.mkdirs(); + File msgFile = new File(chanDir, _currentMessage.getMessageId() + Constants.FILENAME_SUFFIX); + out = msgFile.getPath(); + //ui.errorMessage("Output file must be specified with --out $filename"); + //ui.commandComplete(-1, null); + //return; + } + + File tmpDir = client.getTempDir(); + tmpDir.mkdirs(); + + List cfgFiles = new ArrayList(); + File refFile = null; + + MessageGen cmd = new MessageGen(); + Opts genOpts = new Opts(); + genOpts.setCommand("messagegen"); + if (_currentMessage.getTargetChannel() != null) { + genOpts.setOptValue("targetChannel", _currentMessage.getTargetChannel().toBase64()); + } + genOpts.addOptValue("scopeChannel", scopeChan.getChannelHash().toBase64()); + + for (int i = 0; i < _pageFiles.size(); i++) { + String filename = (String)_pageFiles.get(i); + Properties cfg = (Properties)_pageConfig.get(i); + FileOutputStream fos = null; + try { + File cfgFile = File.createTempFile("pageConfig", ""+ i, tmpDir); + fos = new FileOutputStream(cfgFile); + for (Iterator iter = cfg.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = cfg.getProperty(name); + fos.write(DataHelper.getUTF8(CommandImpl.strip(name) + "=" + CommandImpl.strip(val.trim()) + "\n")); + } + fos.close(); + fos = null; + cfgFiles.add(cfgFile); + genOpts.setOptValue("page" + i, filename); + genOpts.setOptValue("page" + i + "-config", cfgFile.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error writing out the configuration for page " + i, ioe); + ui.commandComplete(-1, null); + return; + } + } + + for (int i = 0; i < _attachmentFiles.size(); i++) { + String filename = (String)_attachmentFiles.get(i); + Properties cfg = (Properties)_attachmentConfig.get(i); + FileOutputStream fos = null; + try { + File cfgFile = File.createTempFile("attachConfig", ""+ i, tmpDir); + fos = new FileOutputStream(cfgFile); + for (Iterator iter = cfg.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = cfg.getProperty(name); + fos.write(DataHelper.getUTF8(CommandImpl.strip(name) + "=" + CommandImpl.strip(val.trim()) + "\n")); + } + fos.close(); + fos = null; + cfgFiles.add(cfgFile); + genOpts.setOptValue("attach" + i, filename); + genOpts.setOptValue("attach" + i + "-config", cfgFile.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error writing out the configuration for attachment " + i, ioe); + ui.commandComplete(-1, null); + return; + } + } + + if (_authenticationKey != null) + //genOpts.setOptValue("authenticationKey", Base64.encode(_authenticationKey.getData())); + genOpts.setOptValue("authenticationKey", client.ctx().sha().calculateHash(_authenticationKey.getData()).toBase64()); + if (_authorizationKey != null) { + //genOpts.setOptValue("authorizationKey", Base64.encode(_authorizationKey.getData())); + genOpts.setOptValue("authorizationKey", client.ctx().sha().calculateHash(_authorizationKey.getData()).toBase64()); + } else { + boolean noAuthRequired = false; + if (_currentMessage.getTargetChannelId() >= 0) { + ChannelInfo target = client.getChannel(_currentMessage.getTargetChannelId()); + if (target.getAllowPublicPosts()) { + noAuthRequired = true; + } else if (target.getAllowPublicReplies()) { + List parents = _currentMessage.getHierarchy(); + 
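+            // No authorization key was selected, but the target channel allows public
+            // replies: the post can still be treated as authorized if some ancestor in
+            // its thread is scoped to the target channel itself or to one of its
+            // authorized posters/managers; otherwise it is generated with
+            // postAsUnauthorized=true further below.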
if (parents != null) { + for (int i = 0; i < parents.size(); i++) { + SyndieURI parent = (SyndieURI)parents.get(i); + Set allowed = new HashSet(); + for (Iterator iter = target.getAuthorizedManagers().iterator(); iter.hasNext(); ) + allowed.add(((SigningPublicKey)iter.next()).calculateHash()); + for (Iterator iter = target.getAuthorizedPosters().iterator(); iter.hasNext(); ) + allowed.add(((SigningPublicKey)iter.next()).calculateHash()); + allowed.add(target.getChannelHash()); + if (allowed.contains(parent.getScope())) { + noAuthRequired = true; + break; + } + } + } + } + } + if (!noAuthRequired) + genOpts.setOptValue("postAsUnauthorized", "true"); + } + + if (_currentMessage.getMessageId() >= 0) + genOpts.setOptValue("messageId", Long.toString(_currentMessage.getMessageId())); + if (_currentMessage.getSubject() != null) + genOpts.setOptValue("subject", _currentMessage.getSubject()); + + if ( (_passphrase != null) && (_passphrasePrompt != null) ) { + genOpts.setOptValue("bodyPassphrase", CommandImpl.strip(_passphrase)); + genOpts.setOptValue("bodyPassphrasePrompt", CommandImpl.strip(_passphrasePrompt)); + } else if ( (_publiclyReadable != null) && (_publiclyReadable.booleanValue()) ) { + genOpts.setOptValue("encryptContent", "false"); // if true, encrypt the content with a known read key for the channel + } + + if (_avatarFile != null) + genOpts.setOptValue("avatar", _avatarFile); + + if (_currentMessage.getWasPrivate()) + genOpts.setOptValue("postAsReply", "true"); // if true, the post should be encrypted to the channel's reply key + + if (_currentMessage.getPublicTags() != null) { + for (Iterator iter = _currentMessage.getPublicTags().iterator(); iter.hasNext(); ) + genOpts.addOptValue("pubTag", (String)iter.next()); + } + if (_currentMessage.getPrivateTags() != null) { + for (Iterator iter = _currentMessage.getPrivateTags().iterator(); iter.hasNext(); ) + genOpts.addOptValue("privTag", (String)iter.next()); + } + if (_referenceNodes.size() > 0) { + String refs = ReferenceNode.walk(_referenceNodes); + FileOutputStream fos = null; + try { + refFile = File.createTempFile("refs", "txt", tmpDir); + fos = new FileOutputStream(refFile); + fos.write(DataHelper.getUTF8(refs)); + fos.close(); + genOpts.setOptValue("refs", refFile.getPath()); + ui.debugMessage("Pulling refs from " + refFile.getPath()); + } catch (IOException ioe) { + ui.errorMessage("Error writing out the references", ioe); + ui.commandComplete(-1, null); + return; + } + } + //* (--cancel $uri)* // posts to be marked as cancelled (only honored if authorized to do so for those posts) + + // replace the $uri with the current post, if authorized to do so + if ( (_currentMessage.getOverwriteChannel() != null) && (_currentMessage.getOverwriteMessage() >= 0) ) + genOpts.setOptValue("overwrite", SyndieURI.createMessage(_currentMessage.getOverwriteChannel(), _currentMessage.getOverwriteMessage()).toString()); + + if ( (_parents != null) && (_parents.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + for (int i = 0; i < _parents.size(); i++) { + SyndieURI uri = (SyndieURI)_parents.get(i); + buf.append(uri.toString()); + if (i + 1 < _parents.size()) + buf.append(","); + } + genOpts.setOptValue("references", buf.toString()); + } + + if (_currentMessage.getExpiration() > 0) + genOpts.setOptValue("expiration", _dayFmt.format(new Date(_currentMessage.getExpiration()))); + + genOpts.setOptValue("forceNewThread", ""+_currentMessage.getForceNewThread()); + genOpts.setOptValue("refuseReplies", ""+_currentMessage.getRefuseReplies()); + + 
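+        // Everything needed for generation has been collected; the code below runs the
+        // nested "messagegen" command to write the signed (and possibly encrypted)
+        // message to the output file, then on success runs "import" on that same file
+        // so the new post shows up in the local database right away.
+        // Roughly equivalent standalone invocation (hypothetical values, for illustration):
+        //   messagegen --scopeChannel $base64Hash --page0 body.txt --page0-config $tmpCfg
+        //              --messageId 1234567890 --subject "hi" --out $msgFile
+        //   import --in $msgFile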
genOpts.setOptValue("out", out); + + NestedUI nestedUI = new NestedUI(ui); + ui.debugMessage("generating with opts: " + genOpts); + cmd.runCommand(genOpts, nestedUI, client); + if (nestedUI.getExitCode() >= 0) { + // generated fine, so lets import 'er + ui.statusMessage("Message generated and written to " + out); + + Importer msgImp = new Importer(); + Opts msgImpOpts = new Opts(); + msgImpOpts.setOptValue("in", out); + if (_passphrase != null) + msgImpOpts.setOptValue("passphrase", CommandImpl.strip(_passphrase)); + msgImpOpts.setCommand("import"); + NestedUI dataNestedUI = new NestedUI(ui); + msgImp.runCommand(msgImpOpts, dataNestedUI, client); + if (dataNestedUI.getExitCode() < 0) { + ui.debugMessage("Failed in the nested import command"); + ui.commandComplete(dataNestedUI.getExitCode(), null); + } else { + ui.statusMessage("Post imported"); + ui.commandComplete(0, null); + resetContent(); + } + } else { + ui.errorMessage("Error generating the message"); + ui.commandComplete(nestedUI.getExitCode(), null); + } + + for (int i = 0; i < cfgFiles.size(); i++) + ((File)cfgFiles.get(i)).delete(); + if (refFile != null) + refFile.delete(); + } + + /** listreadkeys: display a list of known channel read keys that we can use to encrypt the message */ + private void processListReadKeys(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("No post in progress"); + ui.commandComplete(-1, null); + return; + } + Hash channel = _currentMessage.getTargetChannel(); + List keys = client.getNymKeys(client.getLoggedInNymId(), client.getPass(), channel, Constants.KEY_FUNCTION_READ); + _listedNymKeys.clear(); + for (int i = 0; i < keys.size(); i++) { + NymKey key = (NymKey)keys.get(i); + ui.statusMessage("key " + _listedNymKeys.size() + ": " + key.getType() + " for " + key.getChannel().toBase64().substring(0,6) + " (authenticated? 
" + key.getAuthenticated() + ")"); + _listedNymKeys.add(key); + } + ui.commandComplete(0, null); + } + + private void processSet(DBClient client, UI ui, Opts opts) { + for (Iterator iter = opts.getOptNames().iterator(); iter.hasNext(); ) { + String opt = (String)iter.next(); + if ("readkey".equalsIgnoreCase(opt)) { + //set --readkey (public|$index) + if ("public".equalsIgnoreCase(opts.getOptValue(opt))) { + _publiclyReadable = Boolean.TRUE; + _currentMessage.setWasPassphraseProtected(false); + ui.statusMessage("Public read key selected"); + } else if ("pbe".equalsIgnoreCase(opts.getOptValue(opt))) { + _publiclyReadable = Boolean.FALSE; + _passphrase = opts.getOptValue("passphrase"); + _passphrasePrompt = opts.getOptValue("prompt"); + if ( (_passphrase == null) || (_passphrasePrompt == null) ) { + ui.errorMessage("You must specify a --passphrase and a --prompt to use the passphrase base encryption"); + ui.commandComplete(-1, null); + return; + } + _publiclyReadable = Boolean.FALSE; + _currentMessage.setWasPassphraseProtected(true); + ui.statusMessage("Passphrase based read key generated"); + } else { + int index = (int)opts.getOptLong(opt, -1); + if ( (index >= 0) && (index < _listedNymKeys.size()) ) { + Object o = _listedNymKeys.get(index); + if (o instanceof NymKey) { + _readKey = new SessionKey(((NymKey)o).getData()); + _publiclyReadable = Boolean.FALSE; + _currentMessage.setWasPassphraseProtected(false); + ui.statusMessage("Read key selected"); + } else { + ui.errorMessage("Please call listreadkeys before using set --readkey"); + ui.commandComplete(-1, null); + return; + } + } else { + ui.errorMessage("Read key index out of range - please use a valid number or 'public'"); + ui.commandComplete(-1, null); + return; + } + } + } else if ("messageId".equalsIgnoreCase(opt)) { + // set --messageId ($id|date) : specify the message Id, or if 'date', generate one based on the date + if ("date".equalsIgnoreCase(opts.getOptValue(opt))) { + _currentMessage.setMessageId(createEdition(client)); + ui.statusMessage("MessageId randomized based on the date and set to " + _currentMessage.getMessageId()); + } else { + long id = opts.getOptLong(opt, -1); + if (id >= 0) { + _currentMessage.setMessageId(id); + ui.statusMessage("MessageId set to " + id); + } else { + ui.errorMessage("Invalid message id requested - please specify a number or the value 'date'"); + ui.commandComplete(-1, null); + return; + } + } + } else if ("subject".equalsIgnoreCase(opt)) { + // set --subject $subject : specify the message subject + _currentMessage.setSubject(CommandImpl.strip(opts.getOptValue(opt))); + ui.statusMessage("Subject set to " + _currentMessage.getSubject()); + } else if ("avatar".equalsIgnoreCase(opt)) { + // set --avatar $filename : specify a message-specific avatar to use + _avatarFile = opts.getOptValue(opt); + File f = new File(_avatarFile); + if (f.exists()) { + if (f.length() > Constants.MAX_AVATAR_SIZE) { + ui.errorMessage("Avatar file requested is too large (" + f.length() + ", max size " + Constants.MAX_AVATAR_SIZE + ")"); + ui.commandComplete(-1, null); + return; + } + ui.statusMessage("Message-specific avatar selected"); + } else { + ui.errorMessage("Avatar file requested does not exist (" + _avatarFile + ")"); + ui.commandComplete(-1, null); + _avatarFile = null; + return; + } + } else if ("encryptToReply".equalsIgnoreCase(opt)) { + // set --encryptToReply $boolean + _currentMessage.setWasPrivate(opts.getOptBoolean(opt, _currentMessage.getWasPrivate())); + if (_currentMessage.getWasPrivate()) + 
ui.statusMessage("Message will be encrypted to the channel owner's reply key"); + else + ui.statusMessage("Message will be encrypted as a normal channel post"); + } else if ("overwrite".equalsIgnoreCase(opt)) { + // set --overwrite $uri + try { + SyndieURI uri = new SyndieURI(opts.getOptValue(opt)); + if ( (uri.getScope() == null) || (uri.getMessageId() == null) ) { + ui.errorMessage("You can only overwrite syndie messages"); + ui.commandComplete(-1, null); + return; + } + _currentMessage.setOverwriteChannel(uri.getScope()); + _currentMessage.setOverwriteMessage(uri.getMessageId().longValue()); + ui.statusMessage("Post set to overwrite " + uri.getScope().toBase64() + ":" + uri.getMessageId().longValue()); + } catch (URISyntaxException use) { + ui.errorMessage("Invalid syndie overwrite URI: " + opts.getOptValue(opt), use); + ui.commandComplete(-1, null); + return; + } + } else if ("expiration".equalsIgnoreCase(opt)) { + // set --expiration ($yyyyMMdd|none) : suggest a date on which the message can be discarded + String val = opts.getOptValue(opt); + if ("none".equalsIgnoreCase(val)) { + _currentMessage.setExpiration(-1); + ui.statusMessage("Post configured to have no expiration"); + } else { + try { + Date when = _dayFmt.parse(val); + _currentMessage.setExpiration(when.getTime()); + ui.statusMessage("Post configured with a suggested expiration of " + val); + } catch (ParseException pe) { + ui.errorMessage("Invalid expiration requested (please specify YYYYMMDD)", pe); + ui.commandComplete(-1, null); + return; + } + } + } else if ("forceNewThread".equalsIgnoreCase(opt)) { + _currentMessage.setForceNewThread(opts.getOptBoolean(opt, _currentMessage.getForceNewThread())); + ui.statusMessage("Post " + (_currentMessage.getForceNewThread() ? "will " : "will not") + + " force a new discussion thread to be started"); + } else if ("refuseReplies".equalsIgnoreCase(opt)) { + _currentMessage.setRefuseReplies(opts.getOptBoolean(opt, _currentMessage.getRefuseReplies())); + ui.statusMessage("Post " + (_currentMessage.getForceNewThread() ? 
"will " : "will not") + + " allow other people to reply to it directly"); + } else if ("publicTags".equalsIgnoreCase(opt)) { + String tags = opts.getOptValue(opt); + Set pubTags = new HashSet(); + while (tags != null) { + int split = tags.indexOf(','); + if (split < 0) { + pubTags.add(CommandImpl.strip(tags.trim())); + tags = null; + } else if (split == 0) { + tags = tags.substring(1); + } else { + String tag = CommandImpl.strip(tags.substring(0, split).trim()); + if (tag.length() > 0) + pubTags.add(tag); + tags = tags.substring(split+1); + } + } + _currentMessage.setPublicTags(pubTags); + } else if ("privateTags".equalsIgnoreCase(opt)) { + String tags = opts.getOptValue(opt); + Set privTags= new HashSet(); + while (tags != null) { + int split = tags.indexOf(','); + if (split < 0) { + privTags.add(CommandImpl.strip(tags.trim())); + tags = null; + } else if (split == 0) { + tags = tags.substring(1); + } else { + String tag = CommandImpl.strip(tags.substring(0, split).trim()); + if (tag.length() > 0) + privTags.add(tag); + tags = tags.substring(split+1); + } + } + _currentMessage.setPrivateTags(privTags); + } + } + ui.commandComplete(0, null); + } + + public static void main(String args[]) { + String rootDir = TextEngine.getRootPath(); + TextUI ui = new TextUI(true); + TextEngine engine = new TextEngine(rootDir, ui); + ui.insertCommand("login"); + ui.insertCommand("menu post"); + ui.insertCommand("channels"); + ui.insertCommand("create --channel 1"); + ui.insertCommand("listauthkeys"); + ui.insertCommand("authenticate 5"); + ui.insertCommand("authorize 0"); + ui.insertCommand("set --readkey pbe --passphrase 'you smell' --prompt 'do i smell?'"); + ui.insertCommand("execute"); + engine.run(); + } +} diff --git a/src/syndie/db/ReadMenu.java b/src/syndie/db/ReadMenu.java new file mode 100644 index 0000000..82de580 --- /dev/null +++ b/src/syndie/db/ReadMenu.java @@ -0,0 +1,1648 @@ +package syndie.db; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.IOException; +import java.net.URISyntaxException; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.util.*; +import net.i2p.data.*; +import net.i2p.crypto.KeyGenerator; +import syndie.Constants; +import syndie.data.ChannelInfo; +import syndie.data.MessageInfo; +import syndie.data.ReferenceNode; +import syndie.data.SyndieURI; + +/** + * + */ +class ReadMenu implements TextEngine.Menu { + private TextEngine _engine; + /** text description of each channel */ + private List _channelText; + /** text description of each message */ + private List _messageText; + /** internal channel ids (Long) */ + private List _channelKeys; + /** internal message ids (Long) */ + private List _messageKeys; + /** next channel the user should be shown */ + private int _channelIteratorIndex; + /** next message the user should be shown */ + private int _messageIteratorIndex; + /** current channel the user is in (if any) */ + private ChannelInfo _currentChannel; + /** current message in the current channel that the user is reviewing (if any) */ + private MessageInfo _currentMessage; + /** root of the current message's thread */ + private ReferenceNode _currentThreadRoot; + /** SyndieURI for each of the threads matching the most recent 'threads' command */ + private List _threadRootURIs; + /** text describing each of the _threadRootURIs */ + private List _threadText; + + public ReadMenu(TextEngine 
engine) { + _engine = engine; + _messageText = new ArrayList(); + _channelText = new ArrayList(); + _messageKeys = new ArrayList(); + _channelKeys = new ArrayList(); + _threadRootURIs = new ArrayList(); + _threadText = new ArrayList(); + _messageIteratorIndex = 0; + _channelIteratorIndex = 0; + _currentChannel = null; + _currentMessage = null; + _currentThreadRoot = null; + } + + public static final String NAME = "read"; + public String getName() { return NAME; } + public String getDescription() { return "read menu"; } + public boolean requireLoggedIn() { return true; } + public void listCommands(UI ui) { + ui.statusMessage(" channels [--unreadOnly $boolean] [--name $name] [--hash $hashPrefix]"); + ui.statusMessage(" : lists channels matching the given criteria"); + if ( (_messageKeys.size() > 0) || (_channelKeys.size() > 0) ) { + ui.statusMessage(" next [--lines $num] : iterate through the channels/messages"); + ui.statusMessage(" prev [--lines $num] : iterate through the channels/messages"); + } + ui.statusMessage(" meta [--channel ($index|$hash)] : display the channel's metadata"); + ui.statusMessage(" messages [--channel ($index|$hash)] [--includeUnauthorized $boolean]"); + ui.statusMessage(" [--includeUnauthenticated $boolean]"); + ui.statusMessage(" : lists messages matching the given criteria"); + ui.statusMessage(" threads [--channel ($index|$hash|all)] [-tags [-]tag[,[-]tag]*]"); + ui.statusMessage(" [--includeUnauthorized $boolean] [--compact $boolean]"); + ui.statusMessage(" : Display a list of threads matching the given criteria. The "); + ui.statusMessage(" : tags parameter picks threads where at least one message has"); + ui.statusMessage(" : each of the tags, and that none of the messages have any of the"); + ui.statusMessage(" : tags prefaced by -"); + ui.statusMessage(" view [(--message ($index|$uri)|--thread $index)] [--page $n]"); + ui.statusMessage(" : view a page in the given message"); + if (_currentMessage != null) { + ui.statusMessage(" threadnext [--position $position]"); + ui.statusMessage(" : view the next message in the thread (or the given"); + ui.statusMessage(" : thread position)"); + ui.statusMessage(" threadprev [--position $position]"); + ui.statusMessage(" : view the previous message in the thread (or the given"); + ui.statusMessage(" : thread position)"); + ui.statusMessage(" importkey --position $position"); + ui.statusMessage(" : import the key included in the given message reference"); + } + ui.statusMessage(" export [--message ($index|$uri)] --out $directory"); + ui.statusMessage(" : dump the full set of pages/attachments/status to the"); + ui.statusMessage(" : specified directory"); + ui.statusMessage(" save [--message ($index|$uri)] (--page $n|--attachment $n) --out $filename"); + ui.statusMessage(" : save just the specified page/attachment to the given file"); + if (_currentMessage != null) { + ui.statusMessage(" reply : jump to the post menu, prepopulating the --references field"); + } + if ( (_currentChannel != null) || (_currentMessage != null) ) { + ui.statusMessage(" ban [--scope (author|channel|$hash)] [--delete $boolean]"); + ui.statusMessage(" : ban the author or channel so that no more posts from that author"); + ui.statusMessage(" : or messages by any author in that channel will be allowed into the"); + ui.statusMessage(" : Syndie archive. 
If --delete is specified, the messages themselves"); + ui.statusMessage(" : will be removed from the archive as well as the database"); + ui.statusMessage(" decrypt [(--message $msgId|--channel $channelId)] [--passphrase pass]"); + ui.statusMessage(" : attempt to decrypt the specified channel metadata or message for"); + ui.statusMessage(" : those that could not be decrypted earlier"); + ui.statusMessage(" watch (--author $true|--channel $true) [--nickname $name]"); + ui.statusMessage(" [--category $nameInWatchedTree]"); + } + } + public boolean processCommands(DBClient client, UI ui, Opts opts) { + String cmd = opts.getCommand(); + if ("channels".equalsIgnoreCase(cmd)) { + processChannels(client, ui, opts); + } else if ("next".equalsIgnoreCase(cmd)) { + processNext(client, ui, opts); + } else if ("prev".equalsIgnoreCase(cmd)) { + processPrev(client, ui, opts); + } else if ("meta".equalsIgnoreCase(cmd)) { + processMeta(client, ui, opts); + } else if ("messages".equalsIgnoreCase(cmd)) { + processMessages(client, ui, opts); + } else if ("threads".equalsIgnoreCase(cmd)) { + processThreads(client, ui, opts); + } else if ("view".equalsIgnoreCase(cmd)) { + processView(client, ui, opts); + } else if ("threadnext".equalsIgnoreCase(cmd)) { + processThreadNext(client, ui, opts); + } else if ("threadprev".equalsIgnoreCase(cmd)) { + processThreadPrev(client, ui, opts); + } else if ("importkey".equalsIgnoreCase(cmd)) { + processImportKey(client, ui, opts); + } else if ("export".equalsIgnoreCase(cmd)) { + processExport(client, ui, opts); + } else if ("save".equalsIgnoreCase(cmd)) { + processSave(client, ui, opts); + } else if ("reply".equalsIgnoreCase(cmd)) { + processReply(client, ui, opts); + } else if ("ban".equalsIgnoreCase(cmd)) { + processBan(client, ui, opts); + } else if ("decrypt".equalsIgnoreCase(cmd)) { + processDecrypt(client, ui, opts); + } else if ("watch".equalsIgnoreCase(cmd)) { + notImplementedYet(ui); + } else { + return false; + } + return true; + } + private void notImplementedYet(UI ui) { + ui.statusMessage("Command not implemented yet"); + } + public List getMenuLocation(DBClient client, UI ui) { + ArrayList rv = new ArrayList(); + rv.add("read"); + + if (_currentMessage != null) { + long chanId = client.getChannelId(_currentMessage.getTargetChannel()); + // we refetch the channel so when we bounce around scopes within a single + //thread, it looks less confusing + ChannelInfo chan = client.getChannel(chanId); + rv.add("chan '" + chan.getName() + "'/" + chan.getChannelHash().toBase64().substring(0,6)); + rv.add("msg " + _currentMessage.getMessageId()); + } else if (_currentChannel != null) { + rv.add("chan '" + _currentChannel.getName() + "'/" + _currentChannel.getChannelHash().toBase64().substring(0,6)); + if (_messageKeys.size() > 0) + rv.add("message list"); + } else if (_channelKeys.size() > 0) { + rv.add("channel list"); + } + return rv; + } + + private static final SimpleDateFormat _dayFmt = new SimpleDateFormat("yyyy/MM/dd"); + private static final String SQL_LIST_CHANNELS = "SELECT channelId, channelHash, name, description, COUNT(msgId), MAX(messageId) FROM channel LEFT OUTER JOIN channelMessage ON channelId = targetChannelId GROUP BY channelId, name, description, channelHash"; + /** channels [--unreadOnly $boolean] [--name $name] [--hash $hashPrefix] */ + private void processChannels(DBClient client, UI ui, Opts opts) { + _channelIteratorIndex = 0; + _channelKeys.clear(); + _channelText.clear(); + _messageIteratorIndex = 0; + _messageKeys.clear(); + _messageText.clear(); + 
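+        // listing channels restarts browsing from scratch - the resets below also drop
+        // whichever channel, message, or thread the user previously had open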
_currentChannel = null; + _currentMessage = null; + _currentThreadRoot = null; + + boolean unreadOnly = opts.getOptBoolean("unreadOnly", false); + if (unreadOnly) { + ui.statusMessage("Ignoring the unreadOnly flag, as it is not yet supported"); + unreadOnly = false; + } + String name = opts.getOptValue("name"); + String prefix = opts.getOptValue("hash"); + + Connection con = client.con(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + stmt = con.prepareStatement(SQL_LIST_CHANNELS); + rs = stmt.executeQuery(); + while (rs.next()) { + // "channelId, channelHash, name, description"; + long id = rs.getLong(1); + if (rs.wasNull()) + continue; + byte hash[] = rs.getBytes(2); + if (hash == null) + continue; + String curName = rs.getString(3); + String desc = rs.getString(4); + long numMessages = rs.getLong(5); + long mostRecentMsg = rs.getLong(6); + String b64 = Base64.encode(hash); + + if (name != null) { + if (curName == null) + continue; + else if (!curName.startsWith(name)) + continue; + } + + if (prefix != null) { + if (!b64.startsWith(prefix)) + continue; + } + + // ok, matches criteria + _channelKeys.add(new Long(id)); + StringBuffer buf = new StringBuffer(); + + ChannelInfo chan = client.getChannel(id); + if (chan.getReadKeyUnknown()) { + buf.append("(undecrypted metadata)\n\tuse 'decrypt --channel "); + buf.append(_channelKeys.size()-1).append("' to decrypt"); + } else if (chan.getPassphrasePrompt() != null) { + buf.append("(undecrypted metadata) - prompt: \""); + buf.append(CommandImpl.strip(chan.getPassphrasePrompt())).append("\""); + buf.append("\n\tuse 'decrypt --channel "); + buf.append(_channelKeys.size()-1).append(" --passphrase $passphrase' to decrypt"); + } else { + if (curName != null) + buf.append('\'').append(CommandImpl.strip(curName)).append("\' "); + buf.append("(").append(b64.substring(0,6)).append(") "); + if (desc != null) + buf.append("- ").append(CommandImpl.strip(desc)); + buf.append(" messages: ").append(numMessages); + if (numMessages > 0) { + String when = null; + synchronized (_dayFmt) { + when = _dayFmt.format(new Date(mostRecentMsg)); + } + buf.append(" last post on ").append(when); + } + } + _channelText.add(buf.toString()); + } + ui.statusMessage(_channelKeys.size() + " channels matched - use 'next' to view them"); + ui.commandComplete(0, null); + } catch (SQLException se) { + ui.errorMessage("Internal error listing channels", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + /** next [--lines $num] : iterate through the channels/messages */ + private void processNext(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + if (_messageKeys.size() > 0) { + // list messages + if (_messageIteratorIndex >= _messageKeys.size()) { + ui.statusMessage("No more messages - use 'prev' to review earlier messages"); + ui.commandComplete(0, null); + } else { + int end = Math.min(_messageIteratorIndex+num, _messageKeys.size()); + ui.statusMessage("message " + _messageIteratorIndex + " through " + (end-1) + " of " + (_messageKeys.size()-1)); + while (_messageIteratorIndex < end) { + String desc = (String)_messageText.get(_messageIteratorIndex); + ui.statusMessage(_messageIteratorIndex + ": " + desc); + _messageIteratorIndex++; + } + int remaining = _messageKeys.size() - _messageIteratorIndex; + if (remaining > 0) + ui.statusMessage(remaining + " messages remaining"); + else + 
ui.statusMessage("No more messages - use 'prev' to review earlier messages"); + ui.commandComplete(0, null); + } + } else { + // list channels + if (_channelIteratorIndex >= _channelKeys.size()) { + ui.statusMessage("No more channels - use 'prev' to review earlier channels"); + ui.commandComplete(0, null); + } else { + int end = Math.min(_channelIteratorIndex+num, _channelKeys.size()); + ui.statusMessage("channel " + _channelIteratorIndex + " through " + (end-1) + " of " + (_channelKeys.size()-1)); + while (_channelIteratorIndex < end) { + String desc = (String)_channelText.get(_channelIteratorIndex); + ui.statusMessage(_channelIteratorIndex + ": " + desc); + _channelIteratorIndex++; + } + int remaining = _channelKeys.size() - _channelIteratorIndex; + if (remaining > 0) + ui.statusMessage(remaining + " channels remaining"); + else + ui.statusMessage("No more channels - use 'prev' to review earlier channels"); + ui.commandComplete(0, null); + } + } + } + + /** prev [--lines $num] : iterate through the channels/messages */ + private void processPrev(DBClient client, UI ui, Opts opts) { + int num = (int)opts.getOptLong("lines", 10); + int index = 0; + if (_messageKeys.size() > 0) { + _messageIteratorIndex -= num; + if (_messageIteratorIndex < 0) + _messageIteratorIndex = 0; + } else { + _channelIteratorIndex -= num; + if (_channelIteratorIndex < 0) + _channelIteratorIndex = 0; + } + processNext(client, ui, opts); + } + + private void processMeta(DBClient client, UI ui, Opts opts) { + long channelIndex = -1; + Hash channel = null; + String chan = opts.getOptValue("channel"); + if (chan != null) { + try { + long val = Long.parseLong(chan); + channelIndex = val; + } catch (NumberFormatException nfe) { + ui.debugMessage("channel requested is not an index (" + chan + ")"); + // ok, not an integer, maybe its a full channel hash? + byte val[] = Base64.decode(chan); + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) { + channel = new Hash(val); + ui.debugMessage("channel requested is a hash (" + channel.toBase64() + ")"); + } else { + ui.errorMessage("Channel requested is not valid - either specify --channel $index or --channel $base64(channelHash)"); + ui.commandComplete(-1, null); + return; + } + } + } + + long channelId = -1; + if ( (channelIndex >= 0) && (channelIndex < _channelKeys.size()) ) { + channelId = ((Long)_channelKeys.get((int)channelIndex)).longValue(); + _currentChannel = client.getChannel(channelId); + } else if (channel != null) { + channelId = client.getChannelId(channel); + _currentChannel = client.getChannel(channelId); + } + + if (_currentChannel == null) { + ui.debugMessage("channelIndex=" + channelIndex + " channelKeySize: " + _channelKeys.size()); + ui.debugMessage("channel=" + channelIndex); + ui.errorMessage("Invalid or unknown channel requested"); + ui.commandComplete(-1, null); + return; + } + + ui.statusMessage(_currentChannel.toString()); + } + + // $index\t$date\t$subject\t$author + private static final String SQL_LIST_MESSAGES = "SELECT msgId, messageId, subject, authorChannelId FROM channelMessage WHERE targetChannelId = ? 
AND wasPrivate = FALSE AND isCancelled = FALSE"; + /** messages [--channel ($index|$hash)] [--includeUnauthorized $boolean] [--includeUnauthenticated $boolean] */ + private void processMessages(DBClient client, UI ui, Opts opts) { + boolean unauthorized = opts.getOptBoolean("includeUnauthorized", false); + //unauthenticated included by default, since undecrypted posts are + //unauthenticated until successful decryption (and unauthenticated posts + //are only imported if they are authorized) + boolean unauthenticated = opts.getOptBoolean("includeUnauthenticated", true); + long channelIndex = -1; + Hash channel = null; + String chan = opts.getOptValue("channel"); + if (chan == null) { + if (_currentChannel != null) + chan = _currentChannel.getChannelHash().toBase64(); + } + try { + long val = Long.parseLong(chan); + channelIndex = val; + } catch (NumberFormatException nfe) { + ui.debugMessage("channel requested is not an index (" + chan + ")"); + // ok, not an integer, maybe its a full channel hash? + byte val[] = Base64.decode(chan); + if ( (val != null) && (val.length == Hash.HASH_LENGTH) ) { + channel = new Hash(val); + ui.debugMessage("channel requested is a hash (" + channel.toBase64() + ")"); + } else { + ui.errorMessage("Channel requested is not valid - either specify --channel $index or --channel $base64(channelHash)"); + ui.commandComplete(-1, null); + return; + } + } + + long channelId = -1; + if ( (channelIndex >= 0) && (channelIndex < _channelKeys.size()) ) { + channelId = ((Long)_channelKeys.get((int)channelIndex)).longValue(); + _currentChannel = client.getChannel(channelId); + } else if (channel != null) { + channelId = client.getChannelId(channel); + _currentChannel = client.getChannel(channelId); + } + + if ( (channelId < 0) || (_currentChannel == null) ) { + ui.debugMessage("channelIndex=" + channelIndex + " itemKeySize: " + _channelKeys.size()); + ui.debugMessage("channel=" + channelIndex); + ui.debugMessage("currentChannel=" + _currentChannel); + ui.errorMessage("Invalid or unknown channel requested"); + ui.commandComplete(-1, null); + return; + } + + _messageIteratorIndex = 0; + _messageKeys.clear(); + _messageText.clear(); + + if (_currentChannel.getReadKeyUnknown()) { + ui.errorMessage("Channel metadata could not be read, as you did not have the correct channel read key"); + ui.errorMessage("To try and decrypt the metadata, use 'decrypt'"); + // technically, we don't have to return, and can list the readable and unreadable messages in the + // channel, but its probably best not to + return; + } else if (_currentChannel.getPassphrasePrompt() != null) { + ui.errorMessage("Channel metadata could not be read, as you have not specified the"); + ui.errorMessage("correct passphrase. 
The passphrase prompt is " + CommandImpl.strip(_currentChannel.getPassphrasePrompt())); + ui.errorMessage("To try and decrypt the metadata, use 'decrypt --passphrase \"the correct passphrase\"'"); + // technically, we don't have to return, and can list the readable and unreadable messages in the + // channel, but its probably best not to + return; + } + + List privMsgIds = client.getMessageIdsPrivate(_currentChannel.getChannelHash()); + for (int i = 0; i < privMsgIds.size(); i++) { + Long msgId = (Long)privMsgIds.get(i); + _messageKeys.add(msgId); + MessageInfo msg = client.getMessage(msgId.longValue()); + StringBuffer buf = new StringBuffer(); + String date = null; + synchronized (_dayFmt) { + date = _dayFmt.format(new Date(msg.getMessageId())); + } + if (msg.getReplyKeyUnknown() || msg.getReadKeyUnknown()) { + buf.append("(undecrypted private message)\n\tuse 'decrypt --message "); + buf.append(_messageKeys.size()-1).append("' to decrypt"); + } else if (msg.getPassphrasePrompt() != null) { + buf.append("(undecrypted private message) - prompt: \""); + buf.append(CommandImpl.strip(msg.getPassphrasePrompt())); + buf.append("\"\n\tuse 'decrypt --message "); + buf.append(_messageKeys.size()-1).append(" --passphrase $passphrase' to decrypt"); + } else { + buf.append("(Private message) "); + buf.append('[').append(date).append("] "); + if (msg.getSubject() != null) + buf.append('\'').append(CommandImpl.strip(msg.getSubject())).append("\' "); + else + buf.append("(no subject) "); + if (msg.getAuthorChannelId() >= 0) { + ChannelInfo chanInfo = client.getChannel(msg.getAuthorChannelId()); + buf.append(" written by "); + if (chanInfo != null) { + buf.append(chanInfo.getName()).append(" "); + buf.append("[").append(chanInfo.getChannelHash().toBase64().substring(0,6)).append("] "); + } + } + } + _messageText.add(buf.toString()); + } + + Connection con = client.con(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + String sql = SQL_LIST_MESSAGES; + if (!unauthorized) + sql = sql + " AND wasAuthorized = TRUE"; + if (!unauthenticated) + sql = sql + " AND wasAuthenticated = TRUE"; + stmt = con.prepareStatement(sql); + stmt.setLong(1, channelId); + ui.debugMessage("query: " + sql + " (channelId = " + channelId + ")"); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, messageId, subject, authorChannelHash + long id = rs.getLong(1); + if (rs.wasNull()) + continue; + Long messageId = new Long(rs.getLong(2)); + if (rs.wasNull()) + messageId = null; + String subject = rs.getString(3); + long authorChannelId = rs.getLong(4); + if (rs.wasNull()) authorChannelId = -1; + //byte hash[] = rs.getBytes(4); + + // ok, matches criteria + _messageKeys.add(new Long(id)); + StringBuffer buf = new StringBuffer(); + String date = null; + if (messageId != null) { + synchronized (_dayFmt) { + date = _dayFmt.format(new Date(messageId.longValue())); + } + } + + MessageInfo msg = client.getMessage(id); + if (msg.getReplyKeyUnknown() || msg.getReadKeyUnknown()) { + buf.append("(undecrypted message)\n\tuse 'decrypt --message "); + buf.append(_messageKeys.size()-1).append("' to decrypt"); + } else if (msg.getPassphrasePrompt() != null) { + buf.append("(undecrypted message) - prompt: \""); + buf.append(CommandImpl.strip(msg.getPassphrasePrompt())); + buf.append("\"\n\tuse 'decrypt --message "); + buf.append(_messageKeys.size()-1).append(" --passphrase $passphrase' to decrypt"); + } else { + if (date == null) + buf.append("[????/??/??] 
"); + else + buf.append('[').append(date).append("] "); + if (subject != null) + buf.append('\'').append(CommandImpl.strip(subject)).append("\' "); + else + buf.append("(no subject) "); + if (authorChannelId >= 0) { + ChannelInfo info = client.getChannel(authorChannelId); + buf.append(" written by "); + if (info != null) { + buf.append(info.getName()).append(" "); + buf.append("[").append(info.getChannelHash().toBase64().substring(0,6)).append("] "); + } + } + } + _messageText.add(buf.toString()); + } + ui.statusMessage(_messageKeys.size() + " messages matched - use 'next' to view them"); + ui.commandComplete(0, null); + } catch (SQLException se) { + ui.errorMessage("Internal error listing messages", se); + ui.commandComplete(-1, null); + } finally { + if (rs != null) try { rs.close(); } catch (SQLException se) {} + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + /** threads [--channel ($index|$hash|all)] [--tags [-]tag[,[-]tag]*] [--includeUnauthorized $boolean] [--compact $boolean]*/ + private void processThreads(DBClient client, UI ui, Opts opts) { + String chan = opts.getOptValue("channel"); + List tags = opts.getOptValues("tags"); + boolean includeUnauthorized = opts.getOptBoolean("includeUnauthorized", false); + boolean compact = opts.getOptBoolean("compact", true); + if ( (opts.getOptNames().size() <= 0) && (_threadText.size() > 0) ) { + // just display the last result set + for (int i = 0; i < _threadText.size(); i++) { + String line = (String)_threadText.get(i); + ui.statusMessage(line); + } + ui.statusMessage("Matching threads: " + _threadText.size()); + } else { + // recalc the results + _threadRootURIs.clear(); + _threadText.clear(); + + Set channelHashes = new HashSet(); + if (chan == null) { + if (_currentChannel != null) { + channelHashes.add(_currentChannel.getChannelHash()); + } else { + ui.errorMessage("To view threads in all channels, specify --channel all"); + ui.commandComplete(-1, null); + return; + } + } else { + byte chanHash[] = opts.getOptBytes("channel"); + if ( (chanHash != null) && (chanHash.length == Hash.HASH_LENGTH) ) { + channelHashes.add(new Hash(chanHash)); + } else if ("all".equalsIgnoreCase(chan)) { + channelHashes = null; + } else { + try { + int index = Integer.parseInt(chan); + if ( (index >= 0) && (index < _channelKeys.size()) ) { + Long chanId = (Long)_channelKeys.get(index); + ChannelInfo info = client.getChannel(chanId.longValue()); + channelHashes.add(info.getChannelHash()); + } else { + ui.errorMessage("Index is out of range"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + ui.errorMessage("Invalid channel index"); + ui.commandComplete(-1, null); + } + } + } + + Set tagsRequired = new HashSet(); + Set tagsRejected = new HashSet(); + if (tags != null) { + for (int i = 0; i < tags.size(); i++) { + String tag = (String)tags.get(i); + if (tag.startsWith("-")) + tagsRejected.add(tag.substring(1)); + else + tagsRequired.add(tag); + } + } + + ui.debugMessage("Channels: " + (channelHashes == null ? 
"ALL" : channelHashes.toString())); + ui.debugMessage("Required tags: " + tagsRequired.toString()); + ui.debugMessage("Rejected tags: " + tagsRejected.toString()); + + ThreadAccumulator accumulator = new ThreadAccumulator(client, ui); + accumulator.gatherThreads(channelHashes, tagsRequired, tagsRejected); + Map order = new TreeMap(new HighestFirstComparator()); + for (int i = 0; i < accumulator.getThreadCount(); i++) { + long mostRecentDate = accumulator.getMostRecentDate(i); + Long when = new Long(mostRecentDate); + while (order.containsKey(when)) + when = new Long(when.longValue()+1); + order.put(when, new Integer(i)); + } + for (Iterator iter = order.values().iterator(); iter.hasNext(); ) { + int i = ((Integer)iter.next()).intValue(); + SyndieURI rootURI = accumulator.getRootURI(i); + _threadRootURIs.add(rootURI); + Set threadTags = accumulator.getTags(i); + int messages = accumulator.getMessages(i); + String subject = accumulator.getSubject(i); + long rootAuthorId = accumulator.getRootAuthor(i); + long mostRecentAuthorId = accumulator.getMostRecentAuthor(i); + long mostRecentDate = accumulator.getMostRecentDate(i); + + ChannelInfo rootAuthor = client.getChannel(rootAuthorId); + ChannelInfo mostRecentAuthor = client.getChannel(mostRecentAuthorId); + + StringBuffer buf = new StringBuffer(); + if (compact) { + // 10: [2006/10/09 2 msgs] $subject (tag, tag, tag, tag) + buf.append(_threadText.size()).append(": ["); + synchronized (_dayFmt) { + buf.append(_dayFmt.format(new Date(mostRecentDate))); + } + buf.append(" ").append(messages); + if (messages > 1) + buf.append(" msgs] "); + else + buf.append(" msg ] "); + buf.append(CommandImpl.strip(subject)); + if (threadTags.size() > 0) { + buf.append(" ["); + for (Iterator titer = threadTags.iterator(); titer.hasNext(); ) { + String tag = (String)titer.next(); + buf.append(CommandImpl.strip(tag)); + int count = accumulator.getTagCount(i, tag); + if (count > 1) + buf.append("#").append(count); + buf.append(" "); + } + buf.append("]"); + } + } else { + buf.append(_threadText.size()).append(": ").append(CommandImpl.strip(subject)); + buf.append("\n\tOriginal author: "); + if (rootAuthor.getName() != null) + buf.append(CommandImpl.strip(rootAuthor.getName())).append(" "); + buf.append("(").append(rootAuthor.getChannelHash().toBase64().substring(0,6)).append(")"); + if (messages > 1) { + buf.append("\n\tLast reply by "); + if (mostRecentAuthor.getName() != null) + buf.append(CommandImpl.strip(mostRecentAuthor.getName())).append(" "); + buf.append("(").append(mostRecentAuthor.getChannelHash().toBase64().substring(0,6)).append(")"); + } + buf.append("\n\tPost date: "); + synchronized (_dayFmt) { + buf.append(_dayFmt.format(new Date(mostRecentDate))); + } + if (messages > 1) + buf.append("\n\t" + messages + " messages"); + if (threadTags.size() > 0) { + buf.append("\n\tTags: "); + for (Iterator titer = threadTags.iterator(); titer.hasNext(); ) { + String tag = (String)titer.next(); + buf.append(CommandImpl.strip(tag)); + int count = accumulator.getTagCount(i, tag); + if (count > 1) + buf.append("#").append(count); + buf.append(" "); + } + } + } + String line = buf.toString(); + _threadText.add(line); + ui.statusMessage(line); + } + ui.statusMessage("Matching threads: " + _threadText.size()); + } + ui.commandComplete(0, null); + } + + private static final class HighestFirstComparator implements Comparator { + public int compare(Object lhs, Object rhs) { + if (lhs instanceof Long) + return -1*((Long)lhs).compareTo((Long)rhs); + else + return 
-1*((Integer)lhs).compareTo((Integer)rhs); + } + + } + + /** view [(--message ($index|$uri)|--thread $index)] [--page $n] : view a page in the given message */ + private void processView(DBClient client, UI ui, Opts opts) { + boolean rebuildThread = opts.getOptBoolean("rebuildThread", true); + String msg = opts.getOptValue("message"); + + int threadIndex = (int)opts.getOptLong("thread", -1); + if (threadIndex >= 0) { + if (threadIndex >= _threadRootURIs.size()) { + ui.errorMessage("Thread index is out of bounds"); + ui.commandComplete(-1, null); + return; + } + SyndieURI uri = (SyndieURI)_threadRootURIs.get(threadIndex); + msg = uri.toString(); + } + + if (msg != null) { + int index = -1; + try { + index = Integer.parseInt(msg); + if ( (index >= 0) && (index < _messageKeys.size()) ) { + long msgId = ((Long)_messageKeys.get(index)).longValue(); + _currentMessage = client.getMessage(msgId); + if (rebuildThread) + _currentThreadRoot = null; + } else { + ui.errorMessage("Requested message index is out of range"); + ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + try { + SyndieURI uri = new SyndieURI(msg); + long chanId = client.getChannelId(uri.getScope()); + if (chanId >= 0) { + _currentChannel = client.getChannel(chanId); + _currentMessage = client.getMessage(chanId, uri.getMessageId()); + if (rebuildThread) + _currentThreadRoot = null; + if (_currentMessage != null) { + // ok, switched over + } else { + ui.statusMessage("Switched over to the specified channel, but the requested message was not known (" + uri.getMessageId() + ")"); + ui.commandComplete(0, null); + return; + } + } else { + ui.statusMessage("The message requested is not in a locally known channel (" + uri.getScope() + ")"); + ui.commandComplete(0, null); + return; + } + } catch (URISyntaxException use) { + ui.errorMessage("The requested message is neither an index to the message list or a full syndie URI"); + ui.commandComplete(-1, null); + return; + } + } + } + + if (_currentMessage == null) { + ui.errorMessage("Current message is null"); + ui.commandComplete(-1, null); + } else { + displayMessage(client, ui, _currentMessage, (int)opts.getOptLong("page", 1)); + displayThread(client, ui, rebuildThread); + ui.commandComplete(0, null); + } + } + + private static void displayMessage(DBClient client, UI ui, MessageInfo message, int page) { + ChannelInfo scopeChan = client.getChannel(message.getScopeChannelId()); + if (scopeChan != null) { + SyndieURI uri = SyndieURI.createMessage(scopeChan.getChannelHash(), message.getMessageId()); + ui.statusMessage("URI: " + uri.toString()); + } else { + ui.errorMessage("Unable to find the channel info that the post was scoped under (" + message.getScopeChannelId() + ")"); + } + + if (message.getReplyKeyUnknown()) { + ui.statusMessage("Message is an undecrypted private reply message"); + ui.statusMessage("You cannot read this message unless you have the channel's private reply key"); + ui.statusMessage("If you have the key, decrypt with 'decrypt'"); + // technically, we don't have to return, and can display the public tags/etc + return; + } else if (message.getReadKeyUnknown()) { + ui.statusMessage("Message is an undecrypted post"); + ui.statusMessage("You cannot read this message unless you have the correct channel's read key"); + ui.statusMessage("If you have the key, decrypt with 'decrypt'"); + // technically, we don't have to return, and can display the public tags/etc + return; + } else if (message.getPassphrasePrompt() != null) { + ui.statusMessage("Message is 
an undecrypted passphrase protected post"); + ui.statusMessage("You cannot read this message unless you know the correct passphrase"); + ui.statusMessage("The passphrase prompt is: " + CommandImpl.strip(message.getPassphrasePrompt())); + ui.statusMessage("To try and decrypt the message, use 'decrypt --passphrase \"the correct passphrase\"'"); + // technically, we don't have to return, and can display the public tags/etc + return; + } + + if (page >= message.getPageCount()) + page = message.getPageCount(); + if (page <= 0) + page = 1; + if (message.getWasPrivate()) + ui.statusMessage("Message was privately encrypted to the channel reply key"); + if (message.getWasAuthenticated()) { + long authorId = message.getAuthorChannelId(); + if (authorId >= 0) { + if (message.getTargetChannelId() == authorId) { + // no need to mention that the channel's author posted in their own channel + ui.debugMessage("targetChannelId == authorChannelId"); + } else { + ChannelInfo info = client.getChannel(authorId); + if (info != null) { + StringBuffer buf = new StringBuffer(); + buf.append("Author: ").append(CommandImpl.strip(info.getName())); + buf.append(" (").append(info.getChannelHash().toBase64().substring(0,6)).append(")"); + ui.statusMessage(buf.toString()); + } + } + } else { + // author was the target channel itself, so no need to mention an Author + } + } else { + ui.statusMessage("Author was not authenticated"); + } + + Hash chan = message.getTargetChannel(); + long chanId = message.getTargetChannelId(); + ChannelInfo targetChannel = client.getChannel(chanId); + if (targetChannel != null) { + StringBuffer buf = new StringBuffer(); + buf.append("Channel: ").append(CommandImpl.strip(targetChannel.getName())); + buf.append(" (").append(targetChannel.getChannelHash().toBase64().substring(0,6)).append(") "); + if (message.getWasAuthorized()) + buf.append("[post was authorized] "); + else + buf.append("[post was NOT authorized] "); + if (message.getWasAuthenticated()) + buf.append("[post was authenticated] "); + else + buf.append("[post was NOT authenticated] "); + ui.statusMessage(buf.toString()); + } else if (chan != null) { + StringBuffer buf = new StringBuffer(); + buf.append("Channel: "); + buf.append(" (").append(chan.toBase64().substring(0,6)).append(") "); + if (message.getWasAuthorized()) + buf.append("[post was authorized] "); + else + buf.append("[post was NOT authorized] "); + if (message.getWasAuthenticated()) + buf.append("[post was authenticated] "); + else + buf.append("[post was NOT authenticated] "); + ui.statusMessage(buf.toString()); + } + + ui.statusMessage("MessageId: " + message.getMessageId()); + + String when = null; + synchronized (_dayFmt) { when = _dayFmt.format(new Date(message.getMessageId())); } + ui.statusMessage("Date: " + when); + + + if (message.getSubject() != null) + ui.statusMessage("Subject: " + CommandImpl.strip(message.getSubject())); + + Set tags = new TreeSet(); + if (message.getPublicTags() != null) + tags.addAll(message.getPublicTags()); + if (message.getPrivateTags() != null) + tags.addAll(message.getPrivateTags()); + if ( (tags != null) && (tags.size() > 0) ) { + StringBuffer buf = new StringBuffer(); + buf.append("Tags: "); + for (Iterator iter = tags.iterator(); iter.hasNext(); ) { + buf.append(CommandImpl.strip(iter.next().toString())).append(" "); + } + ui.statusMessage(buf.toString()); + } + + String content = client.getMessagePageData(message.getInternalId(), page-1); + if (content == null) { + ui.statusMessage("(content not available)"); + } else { + 
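// display the decrypted page body between separator rules, then report how many attachments the message carries
+ 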
ui.statusMessage("Page: " + page + " of " + message.getPageCount()); + ui.statusMessage("-----------------------------------------------------------------"); + ui.statusMessage(content); + ui.statusMessage("-----------------------------------------------------------------"); + ui.statusMessage("Attachments: " + message.getAttachmentCount()); + } + + List refs = message.getReferences(); + if ( (refs != null) && (refs.size() > 0) ) { + ui.statusMessage("References:"); + ReferenceNode.walk(refs, new RefWalker(ui)); + } + } + + private static class RefWalker implements ReferenceNode.Visitor { + private UI _ui; + private int _nodes; + public RefWalker(UI ui) { _ui = ui; _nodes = 0; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + SyndieURI uri = node.getURI(); + StringBuffer walked = new StringBuffer(); + + walked.append(node.getTreeIndex()).append(": "); + + boolean wasKey = false; + if (uri.getScope() != null) { + if (uri.getString("readKey") != null) { + walked.append("Read key for " + uri.getScope().toBase64() + " included\n"); + wasKey = true; + } else if (uri.getString("postKey") != null) { + walked.append("Post key for " + uri.getScope().toBase64() + " included\n"); + wasKey = true; + } else if (uri.getString("manageKey") != null) { + walked.append("Manage key for " + uri.getScope().toBase64() + " included\n"); + wasKey = true; + } else if (uri.getString("replyKey") != null) { + walked.append("Reply key for " + uri.getScope().toBase64() + " included\n"); + wasKey = true; + } + } + + if (!wasKey) { + walked.append(CommandImpl.strip(node.getName())); + if (node.getDescription() != null) { + walked.append(" - "); + walked.append(CommandImpl.strip(node.getDescription())); + } + walked.append(" [type: ").append(node.getReferenceType()).append("]\n"); + walked.append("\tURI: ").append(uri.toString()); + } + + _ui.statusMessage(walked.toString()); + _nodes++; + } + } + + /** + * importkey --position $position + * import the key included in the given message reference + */ + private void processImportKey(DBClient client, UI ui, Opts opts) { + String position = opts.getOptValue("position"); + List refs = _currentMessage.getReferences(); + KeyRefWalker walker = new KeyRefWalker(ui, position); + ReferenceNode.walk(refs, walker); + ReferenceNode node = walker.getSelectedNode(); + if ( (node == null) || (node.getURI() == null) ) { + ui.errorMessage("Invalid reference position"); + ui.commandComplete(-1, null); + return; + } + SyndieURI uri = node.getURI(); + Hash scope = uri.getScope(); + ui.debugMessage("Selected reference: " + uri.toString() + " [for " + scope + "]"); + if (scope != null) { + SessionKey readKey = uri.getReadKey(); + if (readKey != null) { + // consider the read key authenticated if it was posted by the owner + // or a manager of the channel it refers to + boolean authenticated = false; + long authorChan = _currentMessage.getAuthorChannelId(); + if (authorChan < 0) + authorChan = _currentMessage.getTargetChannelId(); + long scopeChan = client.getChannelId(scope); + if (authorChan == scopeChan) { + authenticated = true; + } else { + ChannelInfo info = client.getChannel(scopeChan); + Set managers = info.getAuthorizedManagers(); + for (Iterator iter = managers.iterator(); iter.hasNext(); ) { + SigningPublicKey pub = (SigningPublicKey)iter.next(); + long mgrChannel = client.getChannelId(pub.calculateHash()); + if (mgrChannel == authorChan) { + authenticated = true; + break; + } + } + } + KeyImport.importKey(ui, client, Constants.KEY_FUNCTION_READ, scope, 
readKey.getData(), authenticated); + ui.statusMessage("Read key for channel " + scope.toBase64() + " imported (authentic? " + authenticated + ")"); + ui.commandComplete(0, null); + return; + } + + SigningPrivateKey postKey = uri.getPostKey(); + if (postKey != null) { + // consider the post key authentic if it is in the target channel's post or + // manage list + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(postKey); + boolean authenticated = false; + if (pub.calculateHash().equals(scope)) + authenticated = true; + if (!authenticated) { + long scopeChan = client.getChannelId(scope); + if (scopeChan < 0) { + ui.debugMessage("Post key is for an unknown channel"); + } else { + ChannelInfo info = client.getChannel(scopeChan); + if (info == null) { + ui.debugMessage("Post key is for an unloadable channel"); + } else { + if (info.getAuthorizedPosters().contains(pub) || + info.getAuthorizedManagers().contains(pub)) + authenticated = true; + } + } + } + + KeyImport.importKey(ui, client, Constants.KEY_FUNCTION_POST, scope, postKey.getData(), authenticated); + ui.statusMessage("Post key for channel " + scope.toBase64() + " imported (authentic? " + authenticated + ")"); + ui.commandComplete(0, null); + return; + } + + SigningPrivateKey manageKey = uri.getManageKey(); + if (manageKey != null) { + // consider the manage key authentic if it is in the target channel's manage list + SigningPublicKey pub = client.ctx().keyGenerator().getSigningPublicKey(manageKey); + boolean authenticated = false; + if (pub.calculateHash().equals(scope)) + authenticated = true; + if (!authenticated) { + long scopeChan = client.getChannelId(scope); + if (scopeChan < 0) { + ui.debugMessage("Manage key is for an unknown channel"); + } else { + ChannelInfo info = client.getChannel(scopeChan); + if (info == null) { + ui.debugMessage("Manage key is for an unloadable channel"); + } else { + if (info.getAuthorizedManagers().contains(pub)) + authenticated = true; + } + } + } + + KeyImport.importKey(ui, client, Constants.KEY_FUNCTION_MANAGE, scope, manageKey.getData(), authenticated); + ui.statusMessage("Manage key for channel " + scope.toBase64() + " imported (authentic? " + authenticated + ")"); + ui.commandComplete(0, null); + return; + } + + PrivateKey replyKey = uri.getReplyKey(); + if (replyKey != null) { + // consider the reply key authentic if it is in the target channel's reply key + PublicKey pub = client.ctx().keyGenerator().getPublicKey(replyKey); + boolean authenticated = false; + long scopeChan = client.getChannelId(scope); + if (scopeChan < 0) { + ui.debugMessage("Reply key is for an unknown channel"); + } else { + ChannelInfo info = client.getChannel(scopeChan); + if (info == null) { + ui.debugMessage("Reply key is for an unloadable channel"); + } else { + if (info.getEncryptKey().equals(pub)) + authenticated = true; + } + } + + KeyImport.importKey(ui, client, Constants.KEY_FUNCTION_REPLY, scope, replyKey.getData(), authenticated); + ui.statusMessage("Reply key for channel " + scope.toBase64() + " imported (authentic? 
" + authenticated + ")"); + ui.commandComplete(0, null); + return; + } + } + ui.errorMessage("Reference does not have a key"); + ui.commandComplete(-1, null); + } + + private static class KeyRefWalker implements ReferenceNode.Visitor { + private UI _ui; + private String _position; + private ReferenceNode _selected; + public KeyRefWalker(UI ui, String position) { _ui = ui; _position = position; } + public ReferenceNode getSelectedNode() { return _selected; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + if (_selected != null) return; + if (node.getTreeIndex().equalsIgnoreCase(_position)) + _selected = node; + } + } + + /** + [Thread: + $position: $channel $date $subject $author + $position: $channel $date $subject $author + $position: $channel $date $subject $author + $position: $channel $date $subject $author] + (thread display includes nesting and the current position, + e.g. "1: $hash 2006/08/01 'I did stuff' me" + "1.1: $hash 2006/08/02 'Liar, you did not' you" + "2: $hash 2006/08/03 'No more stuff talk' foo" + "2.1: $hash 2006/08/03 'wah wah wah' you" + "2.1.1: $hash 2006/08/03 'what you said' me" + "* 2.2: $hash 2006/08/03 'message being displayed...' blah" + "2.2.1: $hash 2006/08/04 'you still talking?' moo") + */ + private void displayThread(DBClient client, UI ui, boolean rebuildThread) { + if (rebuildThread) { + MessageThreadBuilder builder = new MessageThreadBuilder(client, ui); + ui.debugMessage("building the thread from " + _currentMessage.getScopeChannel().toBase64().substring(0,6) + ":" + _currentMessage.getMessageId() + + " (internalId: " + _currentMessage.getInternalId() + " channel: " + _currentMessage.getScopeChannelId() + ")"); + _currentThreadRoot = builder.buildThread(_currentMessage); + } else { + ui.debugMessage("Not rebuilding the thread"); + } + if ( (_currentThreadRoot == null) || (_currentThreadRoot.getChildCount() == 0) ) { + // only one message, no need to display a thread + } else { + List roots = new ArrayList(1); + roots.add(_currentThreadRoot); + ThreadWalker walker = new ThreadWalker(ui); + ui.statusMessage("Thread: "); + ReferenceNode.walk(roots, walker); + } + } + + private class ThreadWalker implements ReferenceNode.Visitor { + private UI _ui; + private int _nodes; + public ThreadWalker(UI ui) { _ui = ui; _nodes = 0; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + SyndieURI uri = node.getURI(); + if (uri == null) return; + Hash channel = uri.getScope(); + Long msgId = uri.getMessageId(); + if ( (channel == null) || (msgId == null) ) return; + //_ui.debugMessage("Walking node " + _nodes + " - " + channel.toBase64() + ":" + msgId.longValue() + " [" + node.getTreeIndex() + "]"); + //if (node.getParent() == null) + // _ui.debugMessage("parent: none"); + //else + // _ui.debugMessage("parent: " + node.getParent().getURI()); + //_ui.debugMessage("Child count: " + node.getChildCount()); + + StringBuffer walked = new StringBuffer(); + + if ( (_currentMessage.getScopeChannel().equals(channel)) && (msgId.longValue() == _currentMessage.getMessageId()) ) + walked.append("* "); + + walked.append(node.getTreeIndex()).append(": "); + if (node.getName() == null) { + // dummy element in the tree, representing a message we don't have locally + walked.append("[message not locally known]"); + walked.append(" (").append(channel.toBase64().substring(0,6)).append(":").append(msgId).append(")"); + } else { + walked.append(CommandImpl.strip(node.getName())); + walked.append(" 
(").append(channel.toBase64().substring(0,6)).append(") "); + String when = null; + synchronized (_dayFmt) { + when = _dayFmt.format(new Date(msgId.longValue())); + } + walked.append(when).append(" "); + walked.append(CommandImpl.strip(node.getDescription())); + } + _ui.statusMessage(walked.toString()); + _nodes++; + } + } + + /** + * threadnext [--position $position] + * view the next message in the thread (or the given thread position) + */ + private void processThreadNext(DBClient client, UI ui, Opts opts) { + if ( (_currentThreadRoot == null) || (_currentThreadRoot.getChildCount() == 0) ) { + // only one message, there is no next + ui.statusMessage("No remaining messages in the thread"); + ui.commandComplete(-1, null); + } else { + String position = opts.getOptValue("position"); + List roots = new ArrayList(1); + roots.add(_currentThreadRoot); + NextThreadWalker walker = new NextThreadWalker(ui, position); + ReferenceNode.walk(roots, walker); + SyndieURI uri = walker.getNextURI(); + if (uri != null) { + Opts viewOpts = new Opts(); + viewOpts.setCommand("view"); + viewOpts.setOptValue("message", uri.toString()); + viewOpts.setOptValue("rebuildThread", "false"); + processView(client, ui, viewOpts); + } else { + ui.statusMessage("No remaining messages in the thread"); + ui.commandComplete(-1, null); + } + } + } + + private class NextThreadWalker implements ReferenceNode.Visitor { + private UI _ui; + private String _wanted; + private int _nodes; + private SyndieURI _nextURI; + private SyndieURI _prevURI; + private boolean _prevWasCurrent; + public NextThreadWalker(UI ui, String wanted) { _ui = ui; _nodes = 0; _wanted = wanted; } + public SyndieURI getNextURI() { return _nextURI; } + public SyndieURI getPrevURI() { return _prevURI; } + public void visit(ReferenceNode node, int indent, int siblingOrder) { + SyndieURI uri = node.getURI(); + if (uri == null) return; + Hash channel = uri.getScope(); + Long msgId = uri.getMessageId(); + if ( (channel == null) || (msgId == null) ) return; + + _ui.debugMessage("Visiting " + node.getTreeIndex() + ": " + channel.toBase64().substring(0,6) + ":" + msgId); + if (_nextURI != null) return; // done + if (_prevWasCurrent) { + _prevWasCurrent = false; + if (_wanted == null) { // pick next available + _nextURI = node.getURI(); + _ui.debugMessage("no position specified and the previous was current. 
setting next=" + node.getTreeIndex()); + return; + } + } + + if ( (_currentMessage.getScopeChannel().equals(channel)) && (msgId.longValue() == _currentMessage.getMessageId()) ) { + _prevWasCurrent = true; + _ui.debugMessage("current message is being viewed (" + node.getTreeIndex() + ")"); + } else { + _prevURI = uri; + _ui.debugMessage("current message is not being viewed, updating prevURI to " + node.getTreeIndex()); + } + + if ( (_wanted != null) && (_wanted.equalsIgnoreCase(node.getTreeIndex())) ) { + if (node.getName() == null) { + // dummy element in the tree, representing a message we don't have locally + _ui.errorMessage("Requested thread message is not known locally: " + node.getURI().toString()); + } else { + _nextURI = uri; + _prevURI = uri; + _ui.debugMessage("explicit position is matched (treeIndex of " + node.getTreeIndex() + ")"); + } + } + _nodes++; + } + } + /** + * threadprev [--position $position] + * view the previous message in the thread (or the given thread position) + */ + private void processThreadPrev(DBClient client, UI ui, Opts opts) { + if ( (_currentThreadRoot == null) || (_currentThreadRoot.getChildCount() == 0) ) { + // only one message, there is no previous + ui.statusMessage("No earlier messages in the thread"); + ui.commandComplete(-1, null); + } else { + String position = opts.getOptValue("position"); + List roots = new ArrayList(1); + roots.add(_currentThreadRoot); + NextThreadWalker walker = new NextThreadWalker(ui, position); + ReferenceNode.walk(roots, walker); + SyndieURI uri = walker.getPrevURI(); + if (uri != null) { + Opts viewOpts = new Opts(); + viewOpts.setCommand("view"); + viewOpts.setOptValue("message", uri.toString()); + viewOpts.setOptValue("rebuildThread", "false"); + processView(client, ui, viewOpts); + } else { + ui.statusMessage("No earlier messages in the thread"); + ui.commandComplete(-1, null); + } + } + } + + /** export [--message ($index|$uri)] --out $directory */ + private void processExport(DBClient client, UI ui, Opts opts) { + String msg = opts.getOptValue("message"); + if (msg != null) { + try { + int index = Integer.parseInt(msg); + if ( (index >= 0) && (index < _messageKeys.size()) ) { + _currentMessage = client.getMessage(((Long)_messageKeys.get(index)).longValue()); + _currentThreadRoot = null; + } else { + ui.errorMessage("Message index is out of range (highest value is " + _messageKeys.size() + ")"); + ui.commandComplete(-1, null); + return; + } + } catch (NumberFormatException nfe) { + // try it as a full URI + try { + SyndieURI uri = new SyndieURI(msg); + long chanId = client.getChannelId(uri.getScope()); + if (chanId >= 0) { + _currentChannel = client.getChannel(chanId); + _currentMessage = client.getMessage(chanId, uri.getMessageId()); + _currentThreadRoot = null; + if (_currentMessage != null) { + // ok, switched over + } else { + ui.statusMessage("Switched over to the specified channel, but the requested message was not known (" + uri.getMessageId() + ")"); + ui.commandComplete(0, null); + return; + } + } else { + ui.statusMessage("The message requested is not in a locally known channel (" + uri.getScope() + ")"); + ui.commandComplete(0, null); + return; + } + } catch (URISyntaxException use) { + ui.errorMessage("The requested message is neither an index to the message list or a full syndie URI"); + ui.commandComplete(-1, null); + return; + } + } + } + + if (_currentMessage == null) { + ui.errorMessage("No implicit message known, please specify one with --message $index or --message $syndieURI"); + 
ui.commandComplete(-1, null); + return; + } + + CLI.Command cmd = CLI.getCommand("viewmessage"); + if (cmd == null) { + ui.errorMessage("Internal error extracting the message"); + ui.commandComplete(-1, null); + return; + } + + String out = opts.getOptValue("out"); + if (out == null) { + ui.errorMessage("You must specify where the message should be extracted to with --out $outDir"); + ui.commandComplete(-1, null); + return; + } + + NestedUI nestedUI = new NestedUI(ui); + Opts viewOpts = new Opts(); + viewOpts.setCommand("viewmessage"); + viewOpts.setOptValue("internalid", Long.toString(_currentMessage.getInternalId())); + viewOpts.setOptValue("out", out); + cmd.runCommand(viewOpts, nestedUI, client); + ui.commandComplete(nestedUI.getExitCode(), null); + } + + /** save [--message ($index|$uri)] (--page $n|--attachment $n) --out $filename */ + private void processSave(DBClient client, UI ui, Opts opts) { + String msg = opts.getOptValue("message"); + if (msg != null) { + try { + int index = Integer.parseInt(msg); + if ( (index >= 0) && (index < _messageKeys.size()) ) { + _currentMessage = client.getMessage(((Long)_messageKeys.get(index)).longValue()); + _currentThreadRoot = null; + } else { + ui.errorMessage("Message index is out of range (highest value is " + _messageKeys.size() + ")"); + ui.commandComplete(-1, null); + return; + } + } catch (NumberFormatException nfe) { + // try it as a full URI + try { + SyndieURI uri = new SyndieURI(msg); + long chanId = client.getChannelId(uri.getScope()); + if (chanId >= 0) { + _currentChannel = client.getChannel(chanId); + _currentMessage = client.getMessage(chanId, uri.getMessageId()); + _currentThreadRoot = null; + if (_currentMessage != null) { + // ok, switched over + } else { + ui.statusMessage("Switched over to the specified channel, but the requested message was not known (" + uri.getMessageId() + ")"); + ui.commandComplete(0, null); + return; + } + } else { + ui.statusMessage("The message requested is not in a locally known channel (" + uri.getScope() + ")"); + ui.commandComplete(0, null); + return; + } + } catch (URISyntaxException use) { + ui.errorMessage("The requested message is neither an index to the message list or a full syndie URI"); + ui.commandComplete(-1, null); + return; + } + } + } + + if (_currentMessage == null) { + ui.errorMessage("No implicit message known, please specify one with --message $index or --message $syndieURI"); + ui.commandComplete(-1, null); + return; + } + + int page = (int)opts.getOptLong("page", -1); + int attach = (int)opts.getOptLong("attachment", -1); + if ( (page < 0) && (attach < 0) ) { + ui.errorMessage("Please specify a page or attachment to save with --page $num or --attachment $num"); + ui.commandComplete(-1, null); + return; + } + if ( (page >= 0) && (page >= _currentMessage.getPageCount()) ) { + ui.errorMessage("Page is out of range (number of pages: " + _currentMessage.getPageCount() + ")"); + ui.commandComplete(-1, null); + return; + } + if ( (attach >= 0) && (attach >= _currentMessage.getAttachmentCount()) ) { + ui.errorMessage("Attachment is out of range (number of attachments: " + _currentMessage.getAttachmentCount() + ")"); + ui.commandComplete(-1, null); + return; + } + + String filename = opts.getOptValue("out"); + if (filename == null) { + ui.errorMessage("Please specify a file to save the content as with --out $filename"); + ui.commandComplete(-1, null); + return; + } + + FileOutputStream fos = null; + try { + fos = new FileOutputStream(filename); + if (page >= 0) { + String data = 
client.getMessagePageData(_currentMessage.getInternalId(), page); + fos.write(DataHelper.getUTF8(data)); + } else { + fos.write(client.getMessageAttachmentData(_currentMessage.getInternalId(), attach)); + } + fos.close(); + fos = null; + ui.statusMessage("Content written to " + filename); + ui.commandComplete(0, null); + } catch (IOException ioe) { + ui.errorMessage("Error writing the content to " + filename, ioe); + ui.commandComplete(-1, null); + } finally { + if (fos != null) try { fos.close(); } catch (IOException ioe) {} + } + } + + /** reply */ + private void processReply(DBClient client, UI ui, Opts opts) { + if (_currentMessage == null) { + ui.errorMessage("Cannot reply - there is no current message"); + ui.commandComplete(-1, null); + return; + } + Hash target = _currentMessage.getTargetChannel(); + ui.insertCommand("menu post"); + ui.insertCommand("create --channel " + target.toBase64()); + ui.insertCommand("addparent --uri " + _currentMessage.getURI().toString()); + for (int i = 0; i < _currentMessage.getHierarchy().size() && i < 5; i++) { + SyndieURI uri = (SyndieURI)_currentMessage.getHierarchy().get(i); + ui.insertCommand("addParent --uri " + uri.toString()); + } + } + + /** + * ban [--scope (author|channel|$hash)] [--delete $boolean] + * ban the author or channel so that no more posts from that author + * or messages by any author in that channel will be allowed into the + * Syndie archive. If --delete is specified, the messages themselves + * will be removed from the archive as well as the database + */ + private void processBan(DBClient client, UI ui, Opts opts) { + String scope = opts.getOptValue("scope"); + Hash bannedChannel = null; + if (scope == null) { + if (_currentMessage != null) { + // if the scope is not specified and we are viewing a message, + // ban the author ofthe message (or the channel it is in if no author is specified) + bannedChannel = getScopeToBan(client, _currentMessage, true); + } else { + // if the scope is not specified and we are not viewing a message, + // ban the channel we are in (if any) + if (_currentChannel != null) { + bannedChannel = _currentChannel.getChannelHash(); + } + } + } else { + // scope is specified + if ("author".equalsIgnoreCase(scope)) { + bannedChannel = getScopeToBan(client, _currentMessage, true); + } else if ("channel".equalsIgnoreCase(scope)) { + bannedChannel = getScopeToBan(client, _currentMessage, false); + if (bannedChannel == null) + bannedChannel = _currentChannel.getChannelHash(); + } else { + byte scopeBytes[] = Base64.decode(scope); + if ( (scopeBytes != null) && (scopeBytes.length == Hash.HASH_LENGTH) ) + bannedChannel = new Hash(scopeBytes); + } + } + + if (bannedChannel != null) { + boolean delete = opts.getOptBoolean("delete", true); + client.ban(bannedChannel, ui, delete); + ui.statusMessage("Scope banned: " + bannedChannel.toBase64() + " (all posts/metadata deleted? 
" + delete + ")"); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Usage: ban [--scope (author|channel|$hash)] [--delete $boolean]"); + ui.commandComplete(-1, null); + } + } + private Hash getScopeToBan(DBClient client, MessageInfo message, boolean banAuthor) { + if (message == null) return null; + Hash bannedChannel = null; + if (banAuthor) { + long authorId = message.getAuthorChannelId(); + if (authorId >= 0) { + ChannelInfo author = client.getChannel(authorId); + if (author != null) { + bannedChannel = author.getChannelHash(); + } + } + if (bannedChannel == null) { + long scopeId = message.getScopeChannelId(); + if (scopeId >= 0) { + ChannelInfo scopeChan = client.getChannel(scopeId); + if (scopeChan != null) { + bannedChannel = scopeChan.getChannelHash(); + } + } + } + } + if (bannedChannel == null) + bannedChannel = message.getTargetChannel(); + return bannedChannel; + } + + /** + * decrypt [(--message $msgId|--channel $channelId)] [--passphrase pass] + */ + private void processDecrypt(DBClient client, UI ui, Opts opts) { + int messageIndex = (int)opts.getOptLong("message", -1); + int channelIndex = (int)opts.getOptLong("channel", -1); + String passphrase = opts.getOptValue("passphrase"); + + File archivedFile = null; + File archiveDir = client.getArchiveDir(); + if (messageIndex >= 0) { + if (messageIndex < _messageKeys.size()) { + Long msgId = (Long)_messageKeys.get(messageIndex); + MessageInfo msg = client.getMessage(msgId.longValue()); + if (msg != null) { + Hash scope = msg.getScopeChannel(); + File channelDir = new File(archiveDir, scope.toBase64()); + archivedFile = new File(channelDir, msg.getMessageId() + Constants.FILENAME_SUFFIX); + } else { + ui.errorMessage("The message specified could not be found"); + ui.commandComplete(-1, null); + return; + } + } else { + ui.errorMessage("The message index is out of bounds"); + ui.commandComplete(-1, null); + return; + } + } else if (channelIndex >= 0) { + if (channelIndex < _channelKeys.size()) { + Long channelId = (Long)_channelKeys.get(channelIndex); + ChannelInfo chan = client.getChannel(channelId.longValue()); + if (chan != null) { + File channelDir = new File(archiveDir, chan.getChannelHash().toBase64()); + archivedFile = new File(channelDir, "meta" + Constants.FILENAME_SUFFIX); + } else { + ui.errorMessage("The channel metadata specified could not be found"); + ui.commandComplete(-1, null); + return; + } + } else { + ui.errorMessage("The channel index is out of bounds"); + ui.commandComplete(-1, null); + return; + } + } else { + if (_currentMessage != null) { + Hash scope = _currentMessage.getScopeChannel(); + File channelDir = new File(archiveDir, scope.toBase64()); + archivedFile = new File(channelDir, _currentMessage.getMessageId() + Constants.FILENAME_SUFFIX); + } else if (_currentChannel != null) { + File channelDir = new File(archiveDir, _currentChannel.getChannelHash().toBase64()); + archivedFile = new File(channelDir, "meta" + Constants.FILENAME_SUFFIX); + } else { + ui.errorMessage("No channel or message specified to decrypt"); + ui.commandComplete(-1, null); + return; + } + } + + if ( (archivedFile != null) && (!archivedFile.exists()) ) { + ui.errorMessage("The decryption could not be completed, because the signed archive file"); + ui.errorMessage("was not retained"); + ui.commandComplete(-1, null); + return; + } + + Importer imp = new Importer(client, client.getPass()); + NestedUI nestedUI = new NestedUI(ui); + try { + ui.debugMessage("Importing from " + archivedFile.getPath()); + boolean ok = 
imp.processMessage(nestedUI, new FileInputStream(archivedFile), client.getLoggedInNymId(), client.getPass(), passphrase); + if (ok) { + if (nestedUI.getExitCode() == 0) { + ui.statusMessage("Decrypted successfully"); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Decryption failed"); + ui.commandComplete(nestedUI.getExitCode(), null); + } + } else { + ui.errorMessage("Decryption and import failed"); + ui.commandComplete(-1, null); + } + } catch (IOException ioe) { + ui.errorMessage("Decryption failed"); + ui.commandComplete(-1, null); + } + } +} diff --git a/src/syndie/db/SyndicateMenu.java b/src/syndie/db/SyndicateMenu.java new file mode 100644 index 0000000..87ed18c --- /dev/null +++ b/src/syndie/db/SyndicateMenu.java @@ -0,0 +1,557 @@ +package syndie.db; + +import java.io.File; +import java.io.FileInputStream; +import java.io.FileOutputStream; +import java.io.FilenameFilter; +import java.io.IOException; +import java.io.OutputStream; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Date; +import java.util.List; +import java.util.Locale; +import net.i2p.data.*; +import net.i2p.util.EepGet; +import syndie.Constants; +import syndie.data.SyndieURI; + +/** + * + */ +class SyndicateMenu implements TextEngine.Menu { + private TextEngine _engine; + private ArchiveIndex _currentIndex; + private ArchiveDiff _diff; + private HTTPSyndicator _syndicator; + private String _baseUrl; + private String _proxyHost; + private int _proxyPort; + private boolean _shouldProxy; + private boolean _archiveWasRemote; + private int _curPBEIndex; + + public SyndicateMenu(TextEngine engine) { + _engine = engine; + } + + public static final String NAME = "syndicate"; + public String getName() { return NAME; } + public String getDescription() { return "syndication menu"; } + public boolean requireLoggedIn() { return true; } + public void listCommands(UI ui) { + ui.statusMessage(" buildindex : create or update the current archive's index"); + ui.statusMessage(" getindex --archive $url [--proxyHost $host --proxyPort $port] [--pass $pass]"); + ui.statusMessage(" [--scope (all|new|meta|unauth)]"); + ui.statusMessage(" : fetch the appropriate index from the archive"); + ui.statusMessage(" diff [--maxSize $numBytes]"); + ui.statusMessage(" : summarize the differences between the fetched index and the local db"); + ui.statusMessage(" fetch [--style (diff|known|metaonly|pir|unauth)] [--includeReplies $boolean] [--maxSize $numBytes]"); + ui.statusMessage(" : actually fetch the posts/replies/metadata"); + ui.statusMessage(" nextpbe [--lines $num]"); + ui.statusMessage(" prevpbe [--lines $num]"); + ui.statusMessage(" : paginate through the messages using passphrase based encryption"); + ui.statusMessage(" resolvepbe --index $num --passphrase $passphrase"); + ui.statusMessage(" : import the indexed message by using the specified passphrase"); + ui.statusMessage(" schedule --put (outbound|outboundmeta|archive|archivemeta) [--deleteOutbound $boolean] [--knownChanOnly $boolean]"); + ui.statusMessage(" : schedule a set of messages to be posted"); + ui.statusMessage(" put : send up the scheduled posts/replies/metadata to the archive"); + ui.statusMessage(" bulkimport --dir $directory --delete $boolean"); + ui.statusMessage(" : import all of the " + Constants.FILENAME_SUFFIX + " files in the given directory, deleting them on completion"); + ui.statusMessage(" listban : list the channels currently banned in the local archive"); + ui.statusMessage(" unban [--scope $index|$chanHash]"); + } + 
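// Illustrative syndication session using the commands listed above; the archive URL and proxy values are hypothetical examples, not defaults:
+ //   getindex --archive http://archive.example.i2p/ --proxyHost localhost --proxyPort 4444
+ //   diff
+ //   fetch --style known --includeReplies true
+ //   schedule --put outbound --deleteOutbound true
+ //   put
+ 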
public boolean processCommands(DBClient client, UI ui, Opts opts) { + String cmd = opts.getCommand(); + if ("buildindex".equalsIgnoreCase(cmd)) { + processBuildIndex(client, ui, opts); + } else if ("getindex".equalsIgnoreCase(cmd)) { + processGetIndex(client, ui, opts); + } else if ("diff".equalsIgnoreCase(cmd)) { + processDiff(client, ui, opts); + } else if ("fetch".equalsIgnoreCase(cmd)) { + processFetch(client, ui, opts); + } else if ("nextpbe".equalsIgnoreCase(cmd)) { + processNextPBE(client, ui, opts); + } else if ("prevpbe".equalsIgnoreCase(cmd)) { + processPrevPBE(client, ui, opts); + } else if ("resolvepbe".equalsIgnoreCase(cmd)) { + processResolvePBE(client, ui, opts); + } else if ("schedule".equalsIgnoreCase(cmd)) { + processSchedule(client, ui, opts); + } else if ("put".equalsIgnoreCase(cmd)) { + processPut(client, ui, opts); + } else if ("bulkimport".equalsIgnoreCase(cmd)) { + processBulkImport(client, ui, opts); + } else if ("listban".equalsIgnoreCase(cmd)) { + processListBan(client, ui, opts); + } else if ("unban".equalsIgnoreCase(cmd)) { + processUnban(client, ui, opts); + } else { + return false; + } + return true; + } + public List getMenuLocation(DBClient client, UI ui) { + List rv = new ArrayList(); + rv.add("syndicate"); + return rv; + } + + /** + * getindex --archive $url [--proxyHost $host --proxyPort $port] [--pass $pass] + * [--scope (all|new|meta)] + */ + private void processGetIndex(DBClient client, UI ui, Opts opts) { + _diff = null; + _syndicator = null; // delete files? + _baseUrl = opts.getOptValue("archive"); + if (_baseUrl == null) + _baseUrl = client.getDefaultHTTPArchive(); + if (_baseUrl == null) { + ui.errorMessage("The archive url is required. Usage: "); + ui.errorMessage("getindex --archive $url [--proxyHost $host --proxyPort $port] [--pass $pass] [--scope (all|new|meta|unauth)] [--channel $chan]"); + ui.commandComplete(-1, null); + return; + } + _proxyHost = opts.getOptValue("proxyHost"); + _proxyPort = (int)opts.getOptLong("proxyPort", -1); + if ( ( (_proxyHost == null) || (_proxyPort <= 0) ) && + ( (client.getDefaultHTTPProxyHost() != null) && (client.getDefaultHTTPProxyPort() > 0) ) ) { + _proxyHost = client.getDefaultHTTPProxyHost(); + _proxyPort = client.getDefaultHTTPProxyPort(); + } + boolean unauth = false; + String scope = opts.getOptValue("scope"); + String url = null; + if (scope == null) + scope = "all"; + if (!_baseUrl.endsWith("/")) + _baseUrl = _baseUrl + "/"; + if ("new".equalsIgnoreCase(scope)) { + url = _baseUrl + "index-new.dat"; + } else if ("meta".equalsIgnoreCase(scope)) { + url = _baseUrl + "index-meta.dat"; + } else if ("unauth".equalsIgnoreCase(scope)) { + unauth = true; + String chan = opts.getOptValue("channel"); + if (chan != null) { + url = _baseUrl + chan + "/index-unauthorized.dat"; + } else { + url = _baseUrl + "index-unauthorized.dat"; + } + } else { //if ("all".equalsIgnoreCase(scope)) + url = _baseUrl + "index-all.dat"; + } + _shouldProxy = (_proxyHost != null) && (_proxyPort > 0); + _archiveWasRemote = true; + File out = null; + if (_baseUrl.startsWith("/")) { + out = new File(url); + _archiveWasRemote = false; + } else if (_baseUrl.startsWith("file://")) { + out = new File(_baseUrl.substring("file://".length())); + _archiveWasRemote = false; + } else { + try { + out = File.createTempFile("syndicate", ".index", client.getTempDir()); + EepGet get = new EepGet(client.ctx(), _shouldProxy, _proxyHost, (int)_proxyPort, 0, out.getPath(), url, false, null, null); + get.addStatusListener(new UIStatusListener(ui)); + boolean 
fetched = get.fetch(); + if (!fetched) { + ui.errorMessage("Fetch failed of " + url); + ui.commandComplete(-1, null); + return; + } + ui.statusMessage("Fetch complete"); + } catch (IOException ioe) { + ui.errorMessage("Error pulling the index", ioe); + ui.commandComplete(-1, null); + } + } + try { + ArchiveIndex index = ArchiveIndex.loadIndex(out, ui, unauth); + if (index != null) { + ui.statusMessage("Fetched archive loaded with " + index.getChannelCount() + " channels"); + _currentIndex = index; + _syndicator = new HTTPSyndicator(_baseUrl, _proxyHost, _proxyPort, client, ui, _currentIndex); + processDiff(client, ui, opts); + } else { + ui.errorMessage("Unable to load the fetched archive"); + } + ui.commandComplete(0, null); + } catch (IOException ioe) { + ui.errorMessage("Error loading the index", ioe); + ui.commandComplete(-1, null); + } + if (_archiveWasRemote && out != null) + out.delete(); + } + + private class UIStatusListener implements EepGet.StatusListener { + private UI _ui; + public UIStatusListener(UI ui) { _ui = ui; } + public void bytesTransferred(long alreadyTransferred, int currentWrite, long bytesTransferred, long bytesRemaining, String url) { + _ui.debugMessage("Transferred: " + bytesTransferred); + } + public void transferComplete(long alreadyTransferred, long bytesTransferred, long bytesRemaining, String url, String outputFile, boolean notModified) { + _ui.debugMessage("Transfer complete: " + bytesTransferred); + } + public void attemptFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt, int numRetries, Exception cause) { + _ui.debugMessage("Transfer attempt failed: " + bytesTransferred, cause); + } + public void transferFailed(String url, long bytesTransferred, long bytesRemaining, int currentAttempt) { + _ui.statusMessage("Transfer totally failed of " + url); + } + public void headerReceived(String url, int currentAttempt, String key, String val) { + _ui.debugMessage("Header received: " + key + "=" + val); + } + public void attempting(String url) { + _ui.statusMessage("Fetching " + url + "..."); + } + } + + private void processDiff(DBClient client, UI ui, Opts opts) { + if (_currentIndex == null) { + ui.errorMessage("No index loaded"); + ui.commandComplete(-1, null); + return; + } + long maxSize = opts.getOptLong("maxSize", ArchiveIndex.DEFAULT_MAX_SIZE); + if ( (_diff == null) || (maxSize != _diff.maxSizeUsed) ) { + _diff = _currentIndex.diff(client, ui, opts); + } + StringBuffer buf = new StringBuffer(); + if (_diff != null) { + if (_diff.fetchNewUnauthorizedBytes > 0) { + buf.append("Unauthorized posts the remote archive has that we do not:\n"); + buf.append("- ").append(_diff.fetchNewUnauthorizedMetadata.size()).append(" new channels\n"); + buf.append("- ").append(_diff.fetchNewUnauthorizedPosts.size()).append(" new posts\n"); + buf.append("- ").append(_diff.fetchNewUnauthorizedReplies.size()).append(" new replies\n"); + buf.append("To fetch all new unauthorized data, syndie would download:\n"); + buf.append("- ").append((_diff.fetchNewUnauthorizedBytes+1023)/1024).append(" kilobytes\n"); + } else { + buf.append("Things the remote archive has that we do not:\n"); + + buf.append("- ").append(_diff.totalNewChannels).append(" new channels including "); + buf.append(_diff.totalNewMessages).append(" new messages\n"); + + buf.append("- ").append(_diff.totalNewMessagesOnKnownChannels).append(" new messages on "); + buf.append(_diff.totalKnownChannelsWithNewMessages).append(" channels we already know\n"); + + buf.append("- 
").append(_diff.totalUpdatedChannels).append(" updated channels\n"); + + buf.append("To fetch all new posts and metadata, syndie would download:\n"); + buf.append("- ").append((_diff.fetchNewBytes+1023)/1024).append(" kilobytes in "); + buf.append(_diff.fetchNewMetadata.size()).append(" metadata messages, "); + buf.append(_diff.fetchNewPosts.size()).append(" posts, and "); + buf.append(_diff.fetchNewReplies.size()).append(" private replies\n"); + + buf.append("To fetch all new posts and metadata for locally known channels, syndie would download:\n"); + buf.append("- ").append((_diff.fetchKnownBytes+1023)/1024).append(" kilobytes in "); + buf.append(_diff.fetchKnownMetadata.size()).append(" metadata messages, "); + buf.append(_diff.fetchKnownPosts.size()).append(" posts, and "); + buf.append(_diff.fetchKnownReplies.size()).append(" private replies\n"); + + buf.append("To fetch only the updated metadata, syndie would download:\n"); + buf.append("- ").append((_diff.fetchMetaBytes+1023)/1024).append(" kilobytes in "); + buf.append(_diff.fetchMetaMessages.size()).append(" metadata messages\n"); + + buf.append("To avoid certain types of profiling, syndie would download:\n"); + buf.append("- ").append((_diff.fetchPIRBytes+1023)/1024).append(" kilobytes in "); + buf.append(_diff.fetchPIRMetadata.size()).append(" metadata messages, "); + buf.append(_diff.fetchPIRPosts.size()).append(" posts, and "); + buf.append(_diff.fetchPIRReplies.size()).append(" private replies\n"); + } + } + ui.statusMessage(buf.toString()); + ui.commandComplete(0, null); + } + + private void processFetch(DBClient client, UI ui, Opts opts) { + if (_diff == null) { + ui.errorMessage("No archive fetched"); + ui.commandComplete(-1, null); + return; + } + + boolean includeReplies = opts.getOptBoolean("includeReplies", true); + String style = opts.getOptValue("style"); + if (style == null) + style = "diff"; + List uris = null; + if ("known".equalsIgnoreCase(style)) + uris = _diff.getFetchKnownURIs(includeReplies); + else if ("metaonly".equalsIgnoreCase(style)) + uris = _diff.getFetchMetaURIs(); + else if ("pir".equalsIgnoreCase(style)) + uris = _diff.getFetchPIRURIs(); + else if ("unauth".equalsIgnoreCase(style)) + uris = _diff.getFetchNewUnauthorizedURIs(includeReplies); + else // "diff" as the default + uris = _diff.getFetchNewURIs(includeReplies); + + ui.debugMessage("Fetching " + uris.size() + " entries: " + uris); + + boolean ok = _syndicator.fetch(uris); + if (ok) { + ui.debugMessage("Messages fetched. 
Importing..."); + int imported = _syndicator.importFetched(); + int missing = _syndicator.countMissingPassphrases(); + if (missing > 0) { + ui.statusMessage("Some messages could not be imported as they require a passphrase to read."); + ui.statusMessage("To import these " + missing + " messages, please review them with"); + ui.statusMessage("the 'nextpbe' command and import them with the 'resolvepbe' command"); + } + ui.commandComplete(0, null); + } else { + ui.statusMessage("Fetch failed"); + ui.commandComplete(-1, null); + } + } + + private void processNextPBE(DBClient client, UI ui, Opts opts) { + if (_syndicator == null) { + ui.errorMessage("No syndication in progress"); + ui.commandComplete(0, null); + return; + } + int total = _syndicator.countMissingPassphrases(); + int pass = 10; + if (_curPBEIndex + pass > total) + pass = total - _curPBEIndex; + for (int i = 0; i < pass; i++) { + String prompt = _syndicator.getMissingPrompt(_curPBEIndex+i); + SyndieURI uri = _syndicator.getMissingURI(_curPBEIndex+i); + if (uri.getMessageId() == null) + ui.statusMessage((i + _curPBEIndex) + ": Metadata for " + uri.getScope().toBase64() + " requires: "); + else + ui.statusMessage((i + _curPBEIndex) + ": Message " + uri.getMessageId().longValue() + " in " + uri.getScope().toBase64() + " requires: "); + ui.statusMessage("\t" + CommandImpl.strip(prompt)); + } + ui.commandComplete(0, null); + } + private void processPrevPBE(DBClient client, UI ui, Opts opts) { + _curPBEIndex -= 10; + if (_curPBEIndex < 0) + _curPBEIndex = 0; + processNextPBE(client, ui, opts); + } + private void processResolvePBE(DBClient client, UI ui, Opts opts) { + int index = (int)opts.getOptLong("index", 0); + String pass = opts.getOptValue("passphrase"); + _syndicator.importPBE(index, pass); + } + + private void processSchedule(DBClient client, UI ui, Opts opts) { + String style = opts.getOptValue("put"); + if (style == null) { + ui.errorMessage("Usage: schedule --put (outbound|outboundmeta|archive|archivemeta) [--deleteOutbound $boolean]"); + ui.commandComplete(-1, null); + return; + } else if (_syndicator == null) { + ui.errorMessage("An archive's index must be fetched before scheduling updates"); + ui.commandComplete(-1, null); + return; + } + boolean deleteOutbound = opts.getOptBoolean("deleteOutbound", true); + boolean knownChanOnly = opts.getOptBoolean("knownChanOnly", false); + _syndicator.setDeleteOutboundAfterSend(deleteOutbound); + _syndicator.schedulePut(style, knownChanOnly); + ui.statusMessage("Posting scheduled"); + ui.commandComplete(0, null); + } + + private void processPut(DBClient client, UI ui, Opts opts) { + String url = opts.getOptValue("postURL"); + if (url != null) + _syndicator.setPostURLOverride(url); + String pass = opts.getOptValue("passphrase"); + if (pass != null) + _syndicator.setPostPassphrase(pass); + _syndicator.post(); + _syndicator = null; + _diff = null; + } + + /** bulkimport --dir $directory --delete $boolean */ + private void processBulkImport(DBClient client, UI ui, Opts opts) { + String dir = opts.getOptValue("dir"); + boolean del = opts.getOptBoolean("delete", true); + + if (dir == null) { + ui.errorMessage("Usage: bulkimport --dir $directory --delete $boolean"); + ui.commandComplete(-1, null); + return; + } + + int metaImported = 0; + int postImported = 0; + + File f = new File(dir); + File files[] = f.listFiles(_metafilter); + for (int i = 0; i < files.length; i++) { + importMsg(client, ui, files[i]); + if (del) { + boolean deleted = files[i].delete(); + if (!deleted) + 
ui.statusMessage("Unable to delete " + files[i].getPath()); + else + ui.statusMessage("Metadata deleted from " + files[i].getPath()); + } + metaImported++; + } + + files = f.listFiles(_postfilter); + for (int i = 0; i < files.length; i++) { + importMsg(client, ui, files[i]); + if (del) { + boolean deleted = files[i].delete(); + if (!deleted) + ui.statusMessage("Unable to delete " + files[i].getPath()); + else + ui.statusMessage("Post deleted from " + files[i].getPath()); + } + postImported++; + } + + ui.statusMessage("Imported " + metaImported + " metadata and " + postImported + " posts"); + ui.commandComplete(0, null); + } + + private void importMsg(DBClient client, UI ui, File f) { + Importer imp = new Importer(client, client.getPass()); + ui.debugMessage("Importing from " + f.getPath()); + boolean ok; + try { + NestedUI nested = new NestedUI(ui); + ok = imp.processMessage(nested, new FileInputStream(f), client.getLoggedInNymId(), client.getPass(), null); + if (ok && (nested.getExitCode() >= 0) ) { + if (nested.getExitCode() == 1) { + ui.errorMessage("Imported but could not decrypt " + f.getPath()); + } else { + ui.debugMessage("Import successful for " + f.getPath()); + } + } else { + ui.debugMessage("Could not import " + f.getPath()); + } + } catch (IOException ioe) { + ui.errorMessage("Error importing the message from " + f.getPath(), ioe); + } + } + + private static MetaFilter _metafilter = new MetaFilter(); + private static class MetaFilter implements FilenameFilter { + public boolean accept(File dir, String name) { + return name.startsWith("meta") && name.endsWith(Constants.FILENAME_SUFFIX); + } + } + private static PostFilter _postfilter = new PostFilter(); + private static class PostFilter implements FilenameFilter { + public boolean accept(File dir, String name) { + return name.startsWith("post") && name.endsWith(Constants.FILENAME_SUFFIX); + } + } + + private void processListBan(DBClient client, UI ui, Opts opts) { + List chans = client.getBannedChannels(); + ui.statusMessage("Total of " + chans.size() + " banned channels"); + for (int i = 0; i < chans.size(); i++) { + Hash chan = (Hash)chans.get(i); + ui.statusMessage(i + ": banned channel " + chan.toBase64()); + } + ui.commandComplete(0, null); + } + private void processUnban(DBClient client, UI ui, Opts opts) { + String scope = opts.getOptValue("scope"); + if (scope == null) { + ui.errorMessage("Usage: unban [--scope $index|$chanHash]"); + ui.commandComplete(0, null); + return; + } + int index = (int)opts.getOptLong("scope", -1); + if (index >= 0) { + List chans = client.getBannedChannels(); + if (index >= chans.size()) { + ui.errorMessage("Channel out of range - only " + chans.size() + " banned channels"); + ui.commandComplete(-1, null); + return; + } else { + Hash chan = (Hash)chans.get(index); + client.unban(chan); + ui.statusMessage("Channel " + chan.toBase64() + " unbanned"); + ui.commandComplete(0, null); + return; + } + } else { + byte chan[] = Base64.decode(scope); + if ( (chan != null) && (chan.length == Hash.HASH_LENGTH) ) { + client.unban(new Hash(chan)); + ui.statusMessage("Channel " + scope + " unbanned"); + ui.commandComplete(0, null); + } else { + ui.errorMessage("Channel specified is not valid [" + scope + "]"); + ui.commandComplete(-1, null); + } + } + } + + private void processBuildIndex(DBClient client, UI ui, Opts opts) { + File archiveDir = client.getArchiveDir(); + ArchiveIndex index; + try { + // load the whole index into memory + index = ArchiveIndex.buildIndex(client, ui, archiveDir, 
opts.getOptLong("maxSize", ArchiveIndex.DEFAULT_MAX_SIZE)); + // iterate across each channel, building their index-all and index-new files + // as well as pushing data into the overall index-all, index-new, and index-meta files + FileOutputStream outFullAll = new FileOutputStream(new File(archiveDir, "index-all.dat")); + FileOutputStream outFullNew = new FileOutputStream(new File(archiveDir, "index-new.dat")); + FileOutputStream outFullMeta = new FileOutputStream(new File(archiveDir, "index-meta.dat")); + FileOutputStream outFullUnauth = new FileOutputStream(new File(archiveDir, "index-unauthorized.dat")); + for (int i = 0; i < index.getChannelCount(); i++) { + ArchiveChannel chan = index.getChannel(i); + File chanDir = new File(archiveDir, Base64.encode(chan.getScope())); + FileOutputStream outAll = new FileOutputStream(new File(chanDir, "index-all.dat")); + FileOutputStream outNew = new FileOutputStream(new File(chanDir, "index-new.dat")); + FileOutputStream outUnauth = new FileOutputStream(new File(chanDir, "index-unauthorized.dat")); + write(outAll, chan, false); + write(outNew, chan, true); + write(outFullAll, chan, false); + write(outFullNew, chan, true); + write(outFullMeta, chan); + writeUnauth(outUnauth, chan); + writeUnauth(outFullUnauth, chan); + outAll.close(); + outNew.close(); + } + outFullMeta.close(); + outFullNew.close(); + outFullAll.close(); + outFullUnauth.close(); + ui.statusMessage("Index rebuilt"); + } catch (IOException ioe) { + ui.errorMessage("Error building the index", ioe); + } + ui.commandComplete(0, null); + } + + private void write(OutputStream out, ArchiveChannel chan) throws IOException { + write(out, chan, false, true); + } + private void write(OutputStream out, ArchiveChannel chan, boolean newOnly) throws IOException { + write(out, chan, newOnly, false); + } + private void write(OutputStream out, ArchiveChannel chan, boolean newOnly, boolean chanOnly) throws IOException { + chan.write(out, newOnly, chanOnly, false); + } + private void writeUnauth(OutputStream out, ArchiveChannel chan) throws IOException { + chan.write(out, true, false, true); + } + + private static final SimpleDateFormat _fmt = new SimpleDateFormat("yyyy/MM/dd", Locale.UK); + private static final String when(long when) { + synchronized (_fmt) { + return _fmt.format(new Date(when)); + } + } +} diff --git a/src/syndie/db/SyndieURIDAO.java b/src/syndie/db/SyndieURIDAO.java new file mode 100644 index 0000000..e86e482 --- /dev/null +++ b/src/syndie/db/SyndieURIDAO.java @@ -0,0 +1,162 @@ +package syndie.db; + +import java.net.URISyntaxException; +import java.sql.*; +import java.util.*; +import syndie.data.SyndieURI; +import syndie.Constants; +import net.i2p.util.Log; + +public class SyndieURIDAO { + private Log _log; + private DBClient _client; + public SyndieURIDAO(DBClient client) { + _client = client; + _log = client.ctx().logManager().getLog(SyndieURIDAO.class); + } + + private static final String KEY_TYPE = "__TYPE"; + + private static final String SQL_FETCH = "SELECT attribKey, attribValString, attribValLong, attribValBool, attribValStrings FROM uriAttribute WHERE uriId = ?"; + public SyndieURI fetch(long uriId) { + PreparedStatement stmt = null; + Map attribs = new TreeMap(); + String type = null; + try { + stmt = _client.con().prepareStatement(SQL_FETCH); + stmt.setLong(1, uriId); + ResultSet rs = stmt.executeQuery(); + while (rs.next()) { + String key = rs.getString(1); + String valStr = rs.getString(2); + if (!rs.wasNull()) { + if (KEY_TYPE.equals(key)) + type = valStr; + else + 
attribs.put(key, valStr); + } else { + long valLong = rs.getLong(3); + if (!rs.wasNull()) { + attribs.put(key, new Long(valLong)); + } else { + boolean valBool = rs.getBoolean(4); + if (!rs.wasNull()) { + attribs.put(key, new Boolean(valBool)); + } else { + String valStrings = rs.getString(5); + if (!rs.wasNull()) { + String vals[] = Constants.split('\n', valStrings); //valStrings.split("\n"); + attribs.put(key, vals); + } else { + // all null + } + } + } + } + } + } catch (SQLException se) { + se.printStackTrace(); + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + if (_log.shouldLog(Log.DEBUG)) + _log.debug("URI found for " + uriId + ": " + type + ":" + attribs); + return new SyndieURI(type, attribs); + } + + private static final String SQL_NEXTID = "SELECT NEXT VALUE FOR uriIdSequence FROM information_schema.system_sequences WHERE SEQUENCE_NAME = 'URIIDSEQUENCE'"; + private long nextId() { + PreparedStatement stmt = null; + try { + stmt = _client.con().prepareStatement(SQL_NEXTID); + ResultSet rs = stmt.executeQuery(); + if (rs.next()) { + long rv = rs.getLong(1); + if (rs.wasNull()) + return -1; + else + return rv; + } else { + return -1; + } + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error retrieving the next uri ID", se); + return -1; + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + + + private static final String SQL_INSERT = "INSERT INTO uriAttribute (attribKey, attribValString, attribValLong, attribValBool, attribValStrings, uriId, isDescriptive) VALUES (?, ?, ?, ?, ?, ?, ?)"; + public long add(SyndieURI uri) { + long id = nextId(); + if (id < 0) + return id; + PreparedStatement stmt = null; + try { + stmt = _client.con().prepareStatement(SQL_INSERT); + + String type = uri.getType(); + insertAttrib(stmt, KEY_TYPE, type, null, null, null, id, false); + if (_log.shouldLog(Log.DEBUG)) + _log.debug("URI " + id + " added with type " + type); + Map attributes = uri.getAttributes(); + for (Iterator iter = attributes.keySet().iterator(); iter.hasNext(); ) { + String key = (String)iter.next(); + Object val = attributes.get(key); + if (val.getClass().isArray()) { + String vals[] = (String[])val; + insertAttrib(stmt, key, null, null, null, vals, id, false); + } else if (val instanceof Long) { + insertAttrib(stmt, key, null, (Long)val, null, null, id, false); + } else if (val instanceof Boolean) { + insertAttrib(stmt, key, null, null, (Boolean)val, null, id, false); + } else { + insertAttrib(stmt, key, val.toString(), null, null, null, id, false); + } + if (_log.shouldLog(Log.DEBUG)) + _log.debug("URI attribute " + key + " added to " + id); + } + return id; + } catch (SQLException se) { + if (_log.shouldLog(Log.ERROR)) + _log.error("Error adding the uri", se); + return -1; + } finally { + if (stmt != null) try { stmt.close(); } catch (SQLException se) {} + } + } + private void insertAttrib(PreparedStatement stmt, String key, String valString, Long valLong, Boolean valBool, String valStrings[], long id, boolean isDescriptive) throws SQLException { + //"INSERT INTO uriAttribute + // (attribKey, attribValString, attribValLong, attribValBool, attribValStrings, uriId, isDescriptive) + // VALUES (?, ?, ?, ?, ?, ?, ?)"; + stmt.setString(1, key); + if (valString != null) + stmt.setString(2, valString); + else + stmt.setNull(2, Types.VARCHAR); + if (valLong != null) + stmt.setLong(3, valLong.longValue()); + else + stmt.setNull(3, Types.BIGINT); + if (valBool != null) + stmt.setBoolean(4, 
valBool.booleanValue()); + else + stmt.setNull(4, Types.BOOLEAN); + if (valStrings != null) { + StringBuffer buf = new StringBuffer(64); + for (int i = 0; i < valStrings.length; i++) + buf.append(valStrings[i]).append('\n'); + stmt.setString(5, buf.toString()); + } else { + stmt.setNull(5, Types.VARCHAR); + } + stmt.setLong(6, id); + stmt.setBoolean(7, isDescriptive); + int rows = stmt.executeUpdate(); + if (rows != 1) + throw new SQLException("Insert added "+rows+" rows"); + } +} diff --git a/src/syndie/db/TextEngine.java b/src/syndie/db/TextEngine.java new file mode 100644 index 0000000..b9184b2 --- /dev/null +++ b/src/syndie/db/TextEngine.java @@ -0,0 +1,777 @@ +package syndie.db; + +import java.io.File; +import java.sql.SQLException; +import java.text.SimpleDateFormat; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import syndie.Constants; +import syndie.Version; +import syndie.data.SyndieURI; + +public class TextEngine { + private UI _ui; + private boolean _exit; + private DBClient _client; + private List _menus; + private String _currentMenu; + private String _rootFile; + private File _rootDir; + private File _dbDir; + private File _tmpDir; + private File _archiveDir; + private File _outboundDir; + private File _logDir; + private NestedGobbleUI _gobbleUI; + private UI _realUI; + private List _commandHistory; + + public TextEngine(String rootDir, UI ui) { + _realUI = new MenuUI(ui); + _ui = _realUI; + _gobbleUI = new NestedGobbleUI(_realUI); + _exit = false; + _rootFile = rootDir; + _commandHistory = new ArrayList(); + rebuildMenus(); + buildInstallDir(); + } + + /** clear all the old state in the various menus, and put us back at the not-logged-in menu */ + private void rebuildMenus() { + _menus = new ArrayList(); + _menus.add(new StartMenu()); + _menus.add(new LoggedInMenu()); + _menus.add(new ReadMenu(this)); + _menus.add(new ManageMenu(this)); + _menus.add(new PostMenu(this)); + _menus.add(new SyndicateMenu(this)); + _currentMenu = StartMenu.NAME; + } + + public void run() { + while (!_exit) { + if (runStep()) { + // keep going + } else { + break; + } + } + _ui.statusMessage("Syndie engine exiting"); + } + public boolean runStep() { + Opts opts = _ui.readCommand(); + if (opts == null) return false; + String cmdStr = opts.getCommand(); + boolean ignored = true; + String origLine = opts.getOrigLine(); + if ( (cmdStr == null) || (cmdStr.trim().startsWith("--")) ) { + // noop + } else if (processMeta(opts) || processMenu(opts)) { + ignored = false; + if (origLine.startsWith("!") || (origLine.startsWith("^"))) + ignored = true; + } else { + CLI.Command cmd = CLI.getCommand(opts.getCommand()); + if (cmd == null) { + if ( (_client != null) && (_client.getLoggedInNymId() >= 0) ) { + Map aliases = _client.getAliases(_client.getLoggedInNymId()); + String value = (String)aliases.get(opts.getCommand()); + if (value != null) { + executeAlias(value); + return true; + } + } + unknownCommand(opts.getCommand()); + _ui.commandComplete(-1, null); + } else { + ignored = false; + _client = cmd.runCommand(opts, _ui, _client); + if ( (_client == null) || (!_client.isLoggedIn()) ) + rebuildMenus(); + } + } + if (!ignored) + _commandHistory.add(origLine); + return true; + } + + private void processLogout() { + if (_client != null) + _client.close(); + rebuildMenus(); + } + + private void buildInstallDir() { + _rootDir = new File(_rootFile); + _dbDir = new File(_rootDir, "db"); + _tmpDir = new File(_rootDir, "tmp"); + _archiveDir = new File(_rootDir, "archive"); + 
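+        // the remaining directories (outbound, logs) follow; every directory is
+        // created on demand just below, and a freshly created db/ directory marks
+        // a first run, triggering an automatic "init" plus registration of the
+        // default "user"/"pass" account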
_outboundDir = new File(_rootDir, "outbound"); + _logDir = new File(_rootDir, "logs"); + + boolean dbDirCreated = false; + if (!_rootDir.exists()) _rootDir.mkdirs(); + if (!_dbDir.exists()) { _dbDir.mkdir(); dbDirCreated = true; } + if (!_tmpDir.exists()) _tmpDir.mkdir(); + if (!_archiveDir.exists()) _archiveDir.mkdir(); + if (!_outboundDir.exists()) _outboundDir.mkdir(); + if (!_logDir.exists()) _logDir.mkdir(); + + if (dbDirCreated) { + // so it doesn't gather 'command completed'/etc messages on the screen + _ui.insertCommand("gobble"); + _ui.insertCommand("init"); + //--root '" + _rootFile + "' + _ui.insertCommand("register --db '" + getDefaultURL() + "' --login " + DEFAULT_LOGIN + " --pass '" + DEFAULT_PASS + "' --name 'Default account'"); + _ui.insertCommand("ungobble"); + } + + /* + $base/db/syndie.* + tmp/ + archive/index.txt + $scopeHash/meta.snd + /$msgId.snd + outbound/$scopeHash/meta.snd + /$msgId.snd + logs/ + lib/{mini-i2p.jar,hsqldb_gcj.jar,syndie.jar} + bin/{runtext.sh,runcli.sh} + */ + } + public String getDBFile() { return _dbDir.getPath() + File.separator + "syndie"; } + public static String getRootPath() { return System.getProperty("user.home") + File.separator + ".syndie"; } + public DBClient getClient() { return _client; } + + static final String DEFAULT_LOGIN = "user"; + static final String DEFAULT_PASS = "pass"; + + private String getDefaultURL() { return "jdbc:hsqldb:file:" + getDBFile() + ";hsqldb.nio_data_file=false"; } + + private void processLogin(Opts opts) { + String db = opts.getOptValue("db"); + String login = opts.getOptValue("login"); + String pass = opts.getOptValue("pass"); + + if (db == null) + db = getDefaultURL(); + if (login == null) { + login = DEFAULT_LOGIN; + pass = DEFAULT_PASS; + } + + if (_client == null) + _client = new DBClient(I2PAppContext.getGlobalContext(), _rootDir); + else + _client.close(); + try { + if (pass == null) + pass = ""; + _ui.debugMessage("Attempting to log into [" + db + "] w/ ["+login + "]=["+pass +"]"); + long nymId = _client.connect(db, login, pass); + if (nymId >= 0) { + _ui.statusMessage("Login successful (nymId " + nymId + ")"); + rebuildMenus(); + _currentMenu = LoggedInMenu.NAME; + + Properties prefs = _client.getNymPrefs(nymId); + doSetPrefs(prefs); + } else { + _ui.statusMessage("Login failed"); + rebuildMenus(); + } + } catch (SQLException se) { + // *UUUUGLY* + String msg = se.getMessage(); + // "org.hsqldb.HsqlException: The database is already in use by another + // process: org.hsqldb.persist.NIOLockFile@1f4e3045[ + // file =/mnt/data/ux/.syndie/db/syndie.lck, exists=true, locked=false, + // valid=false, fl=null ]: java.lang.Exception: checkHeartbeat(): + // lock file [/mnt/data/ux/.syndie/db/syndie.lck] is presumably + // locked by another process." 
+ // out of all that, checkHeartbeat is probably the only part that isn't + // internationalized (and specifically refers to not being able to log in) + if ( (msg != null) && (msg.indexOf("checkHeartbeat()") >= 0) ) { + _ui.debugMessage("Unable to log in", se); + _ui.errorMessage("Unable to log in, as there is already another"); + _ui.errorMessage("syndie instance accessing that database."); + } else { + _ui.errorMessage("Error trying to login", se); + } + } + } + private void processSwitchMenu(Opts opts) { + String targetMenu = null; + if (opts.size() > 0) + targetMenu = opts.getArg(0); + if ( (_client == null) || (!_client.isLoggedIn()) ) { + if ( (targetMenu != null) && (StartMenu.NAME.equals(targetMenu)) ) { + // leave it be + } else { + // not logged in, so shove 'em to the start + targetMenu = null; + } + } + if (targetMenu != null) { + for (int i = 0; i < _menus.size(); i++) { + Menu cur = (Menu)_menus.get(i); + if (cur.getName().equals(targetMenu)) { + _currentMenu = targetMenu; + break; + } + } + } + if (targetMenu == null) { + _ui.statusMessage("Available menus: "); + boolean loggedIn = (_client != null) && (_client.isLoggedIn()); + for (int i = 0; i < _menus.size(); i++) { + Menu menu = (Menu)_menus.get(i); + if (!menu.requireLoggedIn() || loggedIn) + _ui.statusMessage(" " + menu.getName() + padBlank(menu.getName(), 16) + "(" + menu.getDescription() + ")"); + /* + _ui.statusMessage(" manage (to manage channels)"); + _ui.statusMessage(" read (to read posts)"); + _ui.statusMessage(" priv (to read private messages)"); + _ui.statusMessage(" post (to create private messages or posts)"); + _ui.statusMessage(" archive (archive management)"); + _ui.statusMessage(" key (key management)"); + _ui.statusMessage(" search (search through messages)"); + _ui.statusMessage(" watched (review and manage favorite channels/tags/resources)"); + _ui.statusMessage(" sql (advanced SQL interface to the backend database)"); + */ + } + } + } + private static String padBlank(String name, int paddedSize) { + StringBuffer buf = new StringBuffer(); + int pad = paddedSize - name.length(); + for (int i = 0; i < pad; i++) + buf.append(' '); + return buf.toString(); + } + private Menu getCurrentMenu() { + for (int i = 0; i < _menus.size(); i++) { + Menu menu = (Menu)_menus.get(i); + if (menu.getName().equals(_currentMenu)) + return menu; + } + return null; + } + /** + * Process any menu commands, returning true if the command was + * a handled meta command, false if not + */ + private boolean processMenu(Opts opts) { + String cmd = opts.getCommand(); + if ("logout".equalsIgnoreCase(cmd)) { + processLogout(); + _ui.commandComplete(0, null); + return true; + } else if ("menu".equalsIgnoreCase(cmd)) { + processSwitchMenu(opts); + _ui.commandComplete(0, null); + return true; + } else if ("up".equalsIgnoreCase(cmd)) { + if (_currentMenu != StartMenu.NAME) + _currentMenu = LoggedInMenu.NAME; + _ui.commandComplete(0, null); + return true; + } else if ("prefs".equalsIgnoreCase(cmd)) { + if (_currentMenu != StartMenu.NAME) + processPrefs(opts); + return true; + } else if ("version".equalsIgnoreCase(cmd)) { + _ui.statusMessage("Syndie version: " + Version.VERSION + " (http://syndie.i2p.net/)"); + _ui.commandComplete(0, null); + return true; + } else { + Menu menu = getCurrentMenu(); + if (menu != null) + return menu.processCommands(_client, _ui, opts); + return false; + } + } + + /** + * Process any meta commands (configuring the text engine), returning true + * if the command was a handled meta command, false if not + */ + 
private boolean processMeta(Opts opts) { + String cmd = opts.getCommand(); + if (cmd == null) + cmd = ""; + if ("exit".equalsIgnoreCase(cmd) || "quit".equalsIgnoreCase(cmd)) { + processLogout(); + _ui.commandComplete(0, null); + _exit = true; + return true; + } else if ("gobble".equalsIgnoreCase(cmd)) { + _ui = _gobbleUI; + _ui.statusMessage("Gobbling all normal status messages (until you \"ungobble\")"); + //_ui.commandComplete(0, null); + return true; + } else if ("ungobble".equalsIgnoreCase(cmd)) { + _ui.statusMessage("No longer gobbling normal status messages"); + _ui = _realUI; + //_ui.commandComplete(0, null); + return true; + } else if ("togglePaginate".equalsIgnoreCase(cmd)) { + boolean newState = _ui.togglePaginate(); + if (newState) + _ui.statusMessage("Paginating the output every 10 lines"); + else + _ui.statusMessage("Not paginating the output"); + _ui.commandComplete(0, null); + return true; + } else if ("toggleDebug".equalsIgnoreCase(cmd)) { + boolean newState = _ui.toggleDebug(); + if (newState) + _ui.statusMessage("Displaying debug messages (and logging them to debug.log)"); + else + _ui.statusMessage("Not displaying debug messages"); + _ui.commandComplete(0, null); + return true; + } else if ("init".equalsIgnoreCase(cmd)) { + processInit(opts); + rebuildMenus(); + return true; + } else if ("builduri".equalsIgnoreCase(cmd)) { + processBuildURI(opts); + return true; + } else if ("history".equalsIgnoreCase(cmd)) { + processHistory(opts); + return true; + } else if (cmd.startsWith("!")) { + processHistoryBang(opts); + return true; + } else if (cmd.startsWith("^")) { + processHistoryReplace(opts); + return true; + } else if ("alias".equalsIgnoreCase(cmd)) { + processAlias(opts); + return true; + } else if ("?".equalsIgnoreCase(cmd) || "help".equalsIgnoreCase(cmd)) { + help(); + _ui.commandComplete(0, null); + return true; + } else { + return false; + } + } + + private void processHistory(Opts opts) { + for (int i = 0; i < _commandHistory.size(); i++) + _ui.statusMessage((i+1) + ": " + (String)_commandHistory.get(i)); + } + /** deal with !!, !123, and !-123 */ + private void processHistoryBang(Opts opts) { + String cmd = opts.getCommand(); + if (cmd.startsWith("!!")) { + if (_commandHistory.size() > 0) { + String prevCmd = (String)_commandHistory.get(_commandHistory.size()-1); + _ui.insertCommand(prevCmd); + } else { + _ui.errorMessage("No commands in the history buffer"); + _ui.commandComplete(-1, null); + } + } else { + try { + if (cmd.length() > 1) { + int num = Integer.parseInt(cmd.substring(1)); + if (num < 0) + num = _commandHistory.size() + num; + num--; + if (_commandHistory.size() > num) { + _ui.insertCommand((String)_commandHistory.get(num)); + } else { + _ui.errorMessage("Command history element out of range"); + _ui.commandComplete(-1, null); + } + } else { + _ui.errorMessage("Usage: !$num or !-$num"); + _ui.commandComplete(-1, null); + } + } catch (NumberFormatException nfe) { + _ui.errorMessage("Usage: !$num or !-$num"); + _ui.commandComplete(-1, null); + } + } + } + /** deal with ^a[^b] */ + private void processHistoryReplace(Opts opts) { + if (_commandHistory.size() > 0) { + String prev = (String)_commandHistory.get(_commandHistory.size()-1); + String cmd = opts.getCommand(); + String orig = null; + String replacement = null; + int searchEnd = cmd.indexOf('^', 1); + if (searchEnd < 0) { + orig = cmd.substring(1); + } else { + orig = cmd.substring(1, searchEnd); + replacement = cmd.substring(searchEnd+1); + } + String newVal = replace(prev, orig, replacement, 1); 
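+            // csh-style history substitution: swap the first occurrence of the
+            // search text in the previous command for the replacement (or strip
+            // it when no replacement is given) and queue the result to run next,
+            // e.g. "^meta^post" reruns the previous command with its first
+            // occurrence of "meta" replaced by "post"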
+ _ui.insertCommand(newVal); + } else { + _ui.errorMessage("No history to mangle"); + _ui.commandComplete(-1, null); + } + } + + private static final String replace(String orig, String oldval, String newval, int howManyReplacements) { + if ( (orig == null) || (oldval == null) || (oldval.length() <= 0) ) return orig; + + StringBuffer rv = new StringBuffer(); + char origChars[] = orig.toCharArray(); + char search[] = oldval.toCharArray(); + int numReplaced = 0; + for (int i = 0; i < origChars.length; i++) { + boolean match = true; + if (howManyReplacements <= numReplaced) + match = false; // matched enough, stop + for (int j = 0; match && j < search.length && (j + i < origChars.length); j++) { + if (search[j] != origChars[i+j]) + match = false; + } + if (match) { + if (newval != null) + rv.append(newval); + i += search.length-1; + numReplaced++; + } else { + rv.append(origChars[i]); + } + } + return rv.toString(); + } + + private void processAlias(Opts opts) { + List args = opts.getArgs(); + if (args.size() <= 0) { + displayAliases(); + } else { + String name = (String)args.get(0); + StringBuffer buf = new StringBuffer(); + for (int i = 1; i < args.size(); i++) { + String str = (String)args.get(i); + buf.append(str).append(" "); + } + String value = buf.toString().trim(); + _client.addAlias(_client.getLoggedInNymId(), name, value); + if (value.length() == 0) + _ui.statusMessage("Alias removed for '" + name + "'"); + else + _ui.statusMessage("New alias for '" + name + "': " + value); + } + } + + private void displayAliases() { + Map aliases = _client.getAliases(_client.getLoggedInNymId()); + for (Iterator iter = aliases.keySet().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String value = (String)aliases.get(name); + _ui.statusMessage("Alias '" + name + "': " + value); + } + } + + private void executeAlias(String aliasedValue) { + String cmds[] = Constants.split(';', aliasedValue); + for (int i = 0; i < cmds.length; i++) { + _ui.debugMessage("aliased command " + i + ": " + cmds[i]); + _ui.insertCommand(cmds[i]); + } + } + + private void unknownCommand(String cmd) { + _ui.errorMessage("Command unknown: " + cmd); + _ui.errorMessage("Type ? 
for help"); + } + + private void help() { + _ui.statusMessage("Commands: "); + Menu menu = getCurrentMenu(); + if (menu != null) { + menu.listCommands(_ui); + if (menu.requireLoggedIn()) + _ui.statusMessage(" logout : disconnect from the database, but do not exit syndie"); + if (!_currentMenu.equals(LoggedInMenu.NAME)) + _ui.statusMessage(" up : go up a menu"); + } + _ui.statusMessage(" init $jdbcURL : create a new syndie database"); + _ui.statusMessage(" menu [$newMenu] : switch between the menus, or view available menus"); + _ui.statusMessage(" builduri (--url $url | --channel $chanHash [--message $num [--page $num] )"); + _ui.statusMessage(" : helper method for building Syndie URIs"); + _ui.statusMessage(" toggleDebug : turn on or off debugging output"); + _ui.statusMessage(" togglePaginate : turn on or off output pagination"); + _ui.statusMessage(" prefs [--debug $boolean] [--paginate $boolean] "); + _ui.statusMessage(" [--httpproxyhost $hostname --httpproxyport $portNum]"); + _ui.statusMessage(" [--archive $archiveURL]"); + _ui.statusMessage(" : update or display the logged in nym's preferences"); + _ui.statusMessage(" exit : exit syndie"); + } + + private void processSQL(Opts opts) { + StringBuffer query = new StringBuffer(); + List args = opts.getArgs(); + for (int i = 0; i < args.size(); i++) + query.append(args.get(i).toString()).append(' '); + _client.exec(query.toString(), _ui); + } + + private List getMenuLocation() { + List rv = new ArrayList(); + Menu menu = getCurrentMenu(); + if (menu != null) { + if (menu.requireLoggedIn()) + rv.add("logged in as " + _client.getLogin()); + rv.addAll(menu.getMenuLocation(_client, _ui)); + } else { + _ui.debugMessage("No menu found, current = " + _currentMenu); + rv.add("logged out"); + } + return rv; + } + + private void processInit(Opts opts) { + List args = opts.getArgs(); + String url = getDefaultURL(); + if (args.size() == 1) + url = (String)args.get(0); + try { + _client = new DBClient(I2PAppContext.getGlobalContext(), _rootDir); + _client.connect(url); + //_client.close(); + _ui.statusMessage("Database created at " + url); + _ui.commandComplete(0, null); + return; + } catch (SQLException se) { + _ui.errorMessage("Error creating the database", se); + _ui.commandComplete(-1, null); + return; + } + } + + private static final SimpleDateFormat _backupFmt = new SimpleDateFormat("yyyy-MM-dd"); + private void processBackup(Opts opts) { + if ( (_client == null) || (!_client.isLoggedIn()) ) { + _ui.errorMessage("You must be logged in to backup the database"); + _ui.commandComplete(-1, null); + return; + } + String out = opts.getOptValue("out"); + if ( (out == null) || (out.length() <= 0) ) { + _ui.errorMessage("Usage: backup --out $filename [--includeArchive $boolean]"); + _ui.commandComplete(-1, null); + return; + } + int dateBegin = out.indexOf("DATE"); + if (dateBegin >= 0) { + String pre = ""; + String post = ""; + if (dateBegin > 0) + pre = out.substring(0, dateBegin); + if (dateBegin < out.length()-4) + post = out.substring(0, dateBegin); + synchronized (_backupFmt) { + out = pre + _backupFmt.format(new Date(System.currentTimeMillis())) + post; + } + } + boolean includeArchive = opts.getOptBoolean("includeArchive", false); + _client.backup(_ui, out, includeArchive); + } + + private void processBuildURI(Opts opts) { + SyndieURI uri = null; + String url = opts.getOptValue("url"); + if (url != null) { + uri = SyndieURI.createURL(url); + } else { + byte chan[] = opts.getOptBytes("channel"); + if ( (chan != null) && (chan.length == 
Hash.HASH_LENGTH) ) { + long msgId = opts.getOptLong("message", -1); + if (msgId >= 0) { + long page = opts.getOptLong("page", -1); + if (page >= 0) { + uri = SyndieURI.createMessage(new Hash(chan), msgId, (int)page); + } else { + uri = SyndieURI.createMessage(new Hash(chan), msgId); + } + } else { + uri = SyndieURI.createScope(new Hash(chan)); + } + } else { + String archive = opts.getOptValue("archive"); + String pass = opts.getOptValue("pass"); + if (archive != null) + uri = SyndieURI.createArchive(archive, pass); + } + } + + if (uri != null) { + _ui.statusMessage("Encoded Syndie URI: " + uri.toString()); + _ui.commandComplete(0, null); + } else { + _ui.errorMessage("Could not build the Syndie URI"); + _ui.commandComplete(-1, null); + } + } + + private void processPrefs(Opts opts) { + Properties prefs = _client.getNymPrefs(_client.getLoggedInNymId()); + if (opts.getOptNames().size() > 0) { + // some were set, so actually adjust things rather than simply display + for (Iterator iter = opts.getOptNames().iterator(); iter.hasNext(); ) { + String name = (String)iter.next(); + String val = opts.getOptValue(name); + if ( (val == null) || (val.length() <= 0) ) + prefs.remove(name); + else + prefs.setProperty(name, val); + } + } else { + //System.out.println("Prefs have no opts, defaults are: " + prefs); + } + _client.setNymPrefs(_client.getLoggedInNymId(), prefs); + doSetPrefs(prefs); + _ui.commandComplete(0, null); + } + + private void doSetPrefs(Properties prefs) { + String dbgVal = prefs.getProperty("debug"); + if (dbgVal != null) { + boolean debug = Boolean.valueOf(dbgVal).booleanValue(); + boolean isNowDebug = _ui.toggleDebug(); + if (isNowDebug) { + if (debug) { + // already debugging + } else { + _ui.toggleDebug(); + } + } else { + if (debug) { + _ui.toggleDebug(); + } else { + // already not debugging + } + } + _ui.statusMessage("Preference: display debug messages? " + debug); + } + String paginateVal = prefs.getProperty("paginate"); + if (paginateVal != null) { + boolean paginate = Boolean.valueOf(paginateVal).booleanValue(); + boolean isNowPaginate = _ui.togglePaginate(); + if (isNowPaginate) { + if (paginate) { + // already paginating + } else { + _ui.togglePaginate(); + } + } else { + if (paginate) { + _ui.togglePaginate(); + } else { + // already not paginating + } + } + _ui.statusMessage("Preference: paginate output? 
" + paginate); + } + _client.setDefaultHTTPProxyHost(prefs.getProperty("httpproxyhost")); + String port = prefs.getProperty("httpproxyport"); + if (port != null) { + try { + int num = Integer.parseInt(port); + _client.setDefaultHTTPProxyPort(num); + } catch (NumberFormatException nfe) { + _ui.errorMessage("HTTP proyx port preference is invalid", nfe); + _client.setDefaultHTTPProxyPort(-1); + } + } else { + _client.setDefaultHTTPProxyPort(-1); + } + _client.setDefaultHTTPArchive(prefs.getProperty("archive")); + + if ( (_client.getDefaultHTTPProxyHost() != null) && (_client.getDefaultHTTPProxyPort() > 0) ) + _ui.statusMessage("Preference: default HTTP proxy: " + _client.getDefaultHTTPProxyHost() + ":" + _client.getDefaultHTTPProxyPort()); + else + _ui.statusMessage("Preference: default HTTP proxy: none"); + if (_client.getDefaultHTTPArchive() != null) + _ui.statusMessage("Preference: default archive: " + _client.getDefaultHTTPArchive()); + else + _ui.statusMessage("Preference: default archive: none"); + } + + private class MenuUI extends NestedUI { + public MenuUI(UI ui) { super(ui); } + public void commandComplete(int status, List location) { + _real.commandComplete(status, getMenuLocation()); + } + } + + public interface Menu { + public String getName(); + public String getDescription(); + public boolean requireLoggedIn(); + public void listCommands(UI ui); + public boolean processCommands(DBClient client, UI ui, Opts opts); + public List getMenuLocation(DBClient client, UI ui); + } + + public class StartMenu implements Menu { + public static final String NAME = "start"; + public String getName() { return NAME; } + public String getDescription() { return "root syndie menu"; } + public boolean requireLoggedIn() { return false; } + public void listCommands(UI ui) { + ui.statusMessage(" login [--db $jdbcURL] [--login $nymLogin --pass $nymPass]"); + ui.statusMessage(" restore --in $file [--db $jdbcURL]: restore the database"); + } + public boolean processCommands(DBClient client, UI ui, Opts opts) { + if ("login".equalsIgnoreCase(opts.getCommand())) { + processLogin(opts); + _ui.commandComplete(0, null); + return true; + } else if ("restore".equalsIgnoreCase(opts.getCommand())) { + String in = opts.getOptValue("in"); + String db = opts.getOptValue("db"); + if (db == null) + db = getDefaultURL(); + if (client == null) { + client = new DBClient(I2PAppContext.getGlobalContext(), new File(_rootFile)); + } + client.restore(ui, in, db); + return true; + } else { + return false; + } + } + public List getMenuLocation(DBClient client, UI ui) { return Collections.EMPTY_LIST; } + } + + public class LoggedInMenu implements Menu { + public static final String NAME = "loggedin"; + public String getName() { return NAME; } + public String getDescription() { return "logged in menu"; } + public boolean requireLoggedIn() { return true; } + public void listCommands(UI ui) { + ui.statusMessage(" register [--db $jdbcURL] --login $nymLogin --pass $nymPass --name $nymName"); + ui.statusMessage(" sql $sqlQueryStatement"); + ui.statusMessage(" backup --out $file [--includeArchive $boolean]"); + ui.statusMessage(" : back up the database to the given (compressed) file,"); + ui.statusMessage(" : optionally including the signed archive files"); + } + public boolean processCommands(DBClient client, UI ui, Opts opts) { + if ("sql".equalsIgnoreCase(opts.getCommand())) { + processSQL(opts); + return true; + } else if ("backup".equalsIgnoreCase(opts.getCommand())) { + processBackup(opts); + return true; + } + return false; + } 
+ public List getMenuLocation(DBClient client, UI ui) { return Collections.EMPTY_LIST; } + } +} diff --git a/src/syndie/db/TextUI.java b/src/syndie/db/TextUI.java new file mode 100644 index 0000000..c9379c2 --- /dev/null +++ b/src/syndie/db/TextUI.java @@ -0,0 +1,220 @@ +package syndie.db; + +import java.io.*; +import java.util.*; +import net.i2p.data.DataHelper; +import syndie.Constants; + +/** + * Main scriptable text UI + */ +public class TextUI implements UI { + private boolean _debug = false; + private boolean _paginate = true; + private List _insertedCommands; + private int _linesSinceInput; + private PrintStream _debugOut; + private BufferedReader _in; + + /** @param wantsDebug if true, we want to display debug messages */ + public TextUI(boolean wantsDebug) { + _debug = wantsDebug; + _insertedCommands = new ArrayList(); + try { + _in = new BufferedReader(new InputStreamReader(System.in, "UTF-8")); + try { + _debugOut = new PrintStream(new FileOutputStream("debug.log"), true); + } catch (IOException ioe) { + _debugOut = new PrintStream(new NullOutputStream()); + } + } catch (UnsupportedEncodingException uee) { + errorMessage("internal error, your JVM doesn't support UTF-8?", uee); + throw new RuntimeException("Broken JVM"); + } catch (IOException ioe) { + ioe.printStackTrace(); + } + } + private static final class NullOutputStream extends OutputStream { + public void write(int b) {} + } + private void display(String msg) { display(msg, true); } + private void display(String msg, boolean nl) { + if (nl) + System.out.println(msg); + else + System.out.print(msg); + if (_debug) { + if (nl) + _debugOut.println(msg); + else + _debugOut.print(msg); + } + } + private void display(Exception e) { + e.printStackTrace(); + if (_debug) + e.printStackTrace(_debugOut); + } + + private String readLine() { + try { + return _in.readLine(); + } catch (IOException ioe) { + errorMessage("Error reading STDIN", ioe); + return ""; + } + } + + public Opts readCommand() { return readCommand(true); } + public Opts readCommand(boolean displayPrompt) { + Opts rv = null; + while (rv == null) { + if (displayPrompt) + display("* Next command: ", false); + try { + _linesSinceInput = 0; + String line = null; + if (_insertedCommands.size() == 0) { + line = readLine(); //DataHelper.readLine(System.in); + debugMessage("command line read [" + line + "]"); + } else { + line = (String)_insertedCommands.remove(0); + line = line.trim(); + debugMessage("command line inserted [" + line + "]"); + } + if (line == null) { + // EOF, so assume "exit" + rv = new Opts("exit"); + } else if (line.startsWith("#")) { + // skip comment lines + rv = null; + } else { + rv = new Opts(line); + if (!rv.getParseOk()) { + errorMessage("Error parsing the command [" + line + "]"); + rv = null; + } + } + } catch (Exception e) { + errorMessage("Error parsing the command", e); + } + } + return rv; + } + + public void errorMessage(String msg) { errorMessage(msg, null); } + public void errorMessage(String msg, Exception cause) { + //System.err.println(msg); + display(msg); + if (cause != null) { + display(cause); + } + } + + public void statusMessage(String msg) { + String lines[] = Constants.split('\n', msg); //msg.split("\n"); + if (lines != null) { + for (int i = 0; i < lines.length; i++) { + beforeDisplayLine(); + display(lines[i]); + } + } + } + public void debugMessage(String msg) { debugMessage(msg, null); } + public void debugMessage(String msg, Exception cause) { + if (!_debug) return; + if (msg != null) + display(msg); + if (cause != null) + 
display(cause); + } + public void commandComplete(int status, List location) { + display("* Command execution complete. "); + display("* Status: " + status); + StringBuffer buf = new StringBuffer(); + if (location != null) { + for (int i = 0; i < location.size(); i++) { + buf.append(location.get(i).toString()).append("> "); + } + } + display("* Location: " + buf.toString()); + } + public boolean toggleDebug() { _debug = !_debug; return _debug; } + public boolean togglePaginate() { _paginate = !_paginate; return _paginate; } + + private void beforeDisplayLine() { + _linesSinceInput++; + if (_paginate) { + if (_linesSinceInput > 10) { + System.out.print("[Hit enter to continue]"); + readLine(); + _linesSinceInput = 0; + } + } + } + + public void insertCommand(String cmd) { + if (cmd == null) return; + + // trim off any trailing newlines + while (cmd.length() > 0) { + char c = cmd.charAt(cmd.length()-1); + if ( (c == '\n') || (c == '\r') ) { + cmd = cmd.substring(0, cmd.length()-1); + } else { + if (cmd.length() > 0) + _insertedCommands.add(cmd); + return; + } + } + // blank line + return; + } + + public String readStdIn() { + StringBuffer buf = new StringBuffer(); + statusMessage("Reading standard input until a line containing a single \".\" is reached"); + String line = null; + while (true) { + if (_insertedCommands.size() == 0) + line = readLine(); + else + line = (String)_insertedCommands.remove(0); + + if ( (line == null) || ( (line.length() == 1) && (line.charAt(0) == '.') ) ) + break; + + buf.append(line).append('\n'); + } + return buf.toString(); + } + + public static void main(String args[]) { + System.setProperty("jbigi.dontLog", "true"); + System.setProperty("jcpuid.dontLog", "true"); + + String rootDir = TextEngine.getRootPath(); + String script = null; + for (int i = 0; i < args.length; i++) { + if (args[i].startsWith("@")) + script = args[i].substring(1); + else + rootDir = args[i]; + } + TextUI ui = new TextUI(false); + if (script != null) { + try { + BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(script), "UTF-8")); + String line = null; + while ( (line = in.readLine()) != null) + ui.insertCommand(line); + } catch (UnsupportedEncodingException uee) { + ui.errorMessage("internal error, your JVM doesn't support UTF-8?", uee); + } catch (IOException ioe) { + ui.errorMessage("Error running the script " + script, ioe); + } + } + TextEngine engine = new TextEngine(rootDir, ui); + engine.run(); + } +} diff --git a/src/syndie/db/ThreadAccumulator.java b/src/syndie/db/ThreadAccumulator.java new file mode 100644 index 0000000..8ad5ff6 --- /dev/null +++ b/src/syndie/db/ThreadAccumulator.java @@ -0,0 +1,219 @@ +package syndie.db; + +import java.util.*; +import java.sql.Connection; +import java.sql.PreparedStatement; +import java.sql.ResultSet; +import java.sql.SQLException; +import net.i2p.data.Hash; +import syndie.data.*; + +/** + * + */ +public class ThreadAccumulator { + private DBClient _client; + private UI _ui; + + private List _rootURIs; + /** one List of tags for each root URI, duplicates allowed */ + private List _threadTags; + /** Integer for each thread specifying how many messages are in the thread */ + private List _threadMessages; + /** String describing the subject of the thread */ + private List _threadSubject; + /** internal channel id of the thread root's author */ + private List _threadRootAuthorId; + /** internal channel id of the most recent post's author */ + private List _threadLatestAuthorId; + /** when (Long) the most recent post 
was made */ + private List _threadLatestPostDate; + + public ThreadAccumulator(DBClient client, UI ui) { + _client = client; + _ui = ui; + } + + private static final String SQL_LIST_THREADS_ALL = "SELECT msgId, scopeChannelId, authorChannelId, targetChannelId FROM channelMessage WHERE forceNewThread = TRUE OR msgId NOT IN (SELECT DISTINCT msgId FROM messageHierarchy)"; + private static final String SQL_LIST_THREADS_CHAN = "SELECT msgId, scopeChannelId, authorChannelId, targetChannelId FROM channelMessage WHERE (targetChannelId = ? OR scopeChannelId = ?) AND (forceNewThread = TRUE OR msgId NOT IN (SELECT DISTINCT msgId FROM messageHierarchy) )"; + public void gatherThreads(Set channelHashes, Set tagsRequired, Set tagsRejected) { + init(); + + // - iterate across all matching channels + // - list all threads in the channel + // - list all tags for each thread + // - filter threads per tags + + List rootMsgIds = new ArrayList(); + PreparedStatement stmt = null; + ResultSet rs = null; + try { + if (channelHashes == null) { + stmt = _client.con().prepareStatement(SQL_LIST_THREADS_ALL); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, scopeChannelId, authorChannelId, targetChannelId + long msgId = rs.getLong(1); + if (rs.wasNull()) msgId = -1; + long scopeId = rs.getLong(2); + if (rs.wasNull()) scopeId = -1; + long authorId = rs.getLong(3); + if (rs.wasNull()) authorId = -1; + long targetId = rs.getLong(4); + if (rs.wasNull()) targetId = -1; + + //if (authorId >= 0) + // _threadRootAuthorId.add(new Long(authorId)); + //else + // _threadRootAuthorId.add(new Long(scopeId)); + rootMsgIds.add(new Long(msgId)); + } + _ui.debugMessage("Found root messageIds for all channels: " + rootMsgIds); + rs.close(); + rs = null; + stmt.close(); + stmt = null; + } else { + for (Iterator iter = channelHashes.iterator(); iter.hasNext(); ) { + Hash chan = (Hash)iter.next(); + long chanId = _client.getChannelId(chan); + stmt = _client.con().prepareStatement(SQL_LIST_THREADS_CHAN); + stmt.setLong(1, chanId); + stmt.setLong(2, chanId); + rs = stmt.executeQuery(); + while (rs.next()) { + // msgId, scopeChannelId, authorChannelId, targetChannelId + long msgId = rs.getLong(1); + if (rs.wasNull()) msgId = -1; + long scopeId = rs.getLong(2); + if (rs.wasNull()) scopeId = -1; + long authorId = rs.getLong(3); + if (rs.wasNull()) authorId = -1; + long targetId = rs.getLong(4); + if (rs.wasNull()) targetId = -1; + + //if (authorId >= 0) + // _threadRootAuthorId.add(new Long(authorId)); + //else + // _threadRootAuthorId.add(new Long(scopeId)); + rootMsgIds.add(new Long(msgId)); + } + rs.close(); + rs = null; + stmt.close(); + stmt = null; + + _ui.debugMessage("Found root messageIds including those for channel " + chan.toBase64() + ": " + rootMsgIds); + } // end iterating over channels + } // if (all channels) {} else {} + + // now find the relevent details for each thread + for (int i = 0; i < rootMsgIds.size(); i++) { + Long msgId = (Long)rootMsgIds.get(i); + MessageThreadBuilder builder = new MessageThreadBuilder(_client, _ui); + ReferenceNode root = builder.buildThread(_client.getMessage(msgId.longValue())); + // loads up the details (tags, etc), and if the thread matches the + // criteria, the details are added to _rootURIs, _threadMessages, etc + loadInfo(root, tagsRequired, tagsRejected); + } + } catch (SQLException se) { + _ui.errorMessage("Internal error accumulating threads", se); + } + } + + private void init() { + _rootURIs = new ArrayList(); + _threadTags = new ArrayList(); + _threadMessages = new 
ArrayList(); + _threadSubject = new ArrayList(); + _threadRootAuthorId = new ArrayList(); + _threadLatestAuthorId = new ArrayList(); + _threadLatestPostDate = new ArrayList(); + } + + public int getThreadCount() { return _rootURIs.size(); } + public SyndieURI getRootURI(int index) { return (SyndieURI)_rootURIs.get(index); } + /** sorted set of tags in the given thread */ + public Set getTags(int index) { return new TreeSet((List)_threadTags.get(index)); } + public int getTagCount(int index, String tag) { + int rv = 0; + if (tag == null) return 0; + List tags = (List)_threadTags.get(index); + if (tags == null) return 0; + for (int i = 0; i < tags.size(); i++) + if (tag.equals((String)tags.get(i))) + rv++; + return rv; + } + public int getMessages(int index) { return ((Integer)_threadMessages.get(index)).intValue(); } + public String getSubject(int index) { return (String)_threadSubject.get(index); } + public long getRootAuthor(int index) { return ((Long)_threadRootAuthorId.get(index)).longValue(); } + public long getMostRecentAuthor(int index) { return ((Long)_threadLatestAuthorId.get(index)).longValue(); } + public long getMostRecentDate(int index) { return ((Long)_threadLatestPostDate.get(index)).longValue(); } + + private class Harvester implements ReferenceNode.Visitor { + private int _messages; + private ReferenceNode _latest; + private List _tags; + public Harvester() { + _tags = new ArrayList(); + _messages = 0; + } + public int getMessageCount() { return _messages; } + public ReferenceNode getLatestPost() { return _latest; } + public List getTags() { return _tags; } + public void visit(ReferenceNode node, int depth, int siblingOrder) { + _messages++; + if ( (_latest == null) || (_latest.getURI().getMessageId().longValue() < node.getURI().getMessageId().longValue()) ) + _latest = node; + long chanId = _client.getChannelId(node.getURI().getScope()); + MessageInfo msg = _client.getMessage(chanId, node.getURI().getMessageId()); + _tags.addAll(msg.getPublicTags()); + _tags.addAll(msg.getPrivateTags()); + } + } + + private void loadInfo(ReferenceNode threadRoot, Set tagsRequired, Set tagsRejected) { + // walk the thread to find the latest post / message count / tags + Harvester visitor = new Harvester(); + List roots = new ArrayList(); + roots.add(threadRoot); + ReferenceNode.walk(roots, visitor); + + long rootAuthorId = _client.getChannelId(threadRoot.getURI().getScope()); + int messageCount = visitor.getMessageCount(); + ReferenceNode latestPost = visitor.getLatestPost(); + long latestPostDate = latestPost.getURI().getMessageId().longValue(); + long latestAuthorId = _client.getChannelId(latestPost.getURI().getScope()); + List tags = visitor.getTags(); + + // now filter + if (tagsRejected != null) { + for (Iterator iter = tagsRejected.iterator(); iter.hasNext(); ) { + String tag = (String)iter.next(); + if (tags.contains(tag)) { + _ui.debugMessage("Rejecting thread tagged with " + tag + ": " + threadRoot.getURI().toString()); + return; + } + } + } + if ( (tagsRequired != null) && (tagsRequired.size() > 0) ) { + for (Iterator iter = tagsRequired.iterator(); iter.hasNext(); ) { + String tag = (String)iter.next(); + if (!tags.contains(tag)) { + _ui.debugMessage("Rejecting thread not tagged with " + tag + ": " + threadRoot.getURI().toString()); + return; + } + } + } + + // passed the filter. 
add to the accumulator + _rootURIs.add(threadRoot.getURI()); + _threadSubject.add(threadRoot.getDescription()); + _threadLatestAuthorId.add(new Long(latestAuthorId)); + _threadLatestPostDate.add(new Long(latestPostDate)); + _threadMessages.add(new Integer(messageCount)); + _threadRootAuthorId.add(new Long(rootAuthorId)); + _threadTags.add(tags); + } +} diff --git a/src/syndie/db/UI.java b/src/syndie/db/UI.java new file mode 100644 index 0000000..2c51f4c --- /dev/null +++ b/src/syndie/db/UI.java @@ -0,0 +1,42 @@ +package syndie.db; + +import java.util.List; + +/** + * interface that the client engine queries and updates as it executes the + * requested commands + */ +public interface UI { + public Opts readCommand(); + public Opts readCommand(boolean displayPrompt); + public void errorMessage(String msg); + public void errorMessage(String msg, Exception cause); + public void statusMessage(String msg); + public void debugMessage(String msg); + public void debugMessage(String msg, Exception cause); + /** + * the running command completed + * @param status nonnegative for successful status, negative for failure status + * @param location list of contextual locations (String), generic to specific (most generic first) + */ + public void commandComplete(int status, List location); + /** + * toggle between displaying debug messages and not displaying them + * @return new state + */ + public boolean toggleDebug(); + /** + * toggle between paginating the status output and not + * @return new state + */ + public boolean togglePaginate(); + + /** inject the given command to run next, so it will be the next thing out of readCommand() */ + public void insertCommand(String commandline); + + /** + * read the standard input, replacing os-dependent newline characters with \n (0x0A). + * This reads until a sinle line with just "." is put on it (SMTP-style). 
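+     * The terminating "." line is consumed but not included in the returned text.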
+ */ + public String readStdIn(); +} diff --git a/src/syndie/db/UnreadableEnclosureBody.java b/src/syndie/db/UnreadableEnclosureBody.java new file mode 100644 index 0000000..add4712 --- /dev/null +++ b/src/syndie/db/UnreadableEnclosureBody.java @@ -0,0 +1,14 @@ +package syndie.db; + +import java.io.*; +import net.i2p.I2PAppContext; +import net.i2p.data.*; +import syndie.data.EnclosureBody; + +/** + * + */ +class UnreadableEnclosureBody extends EnclosureBody { + public UnreadableEnclosureBody(I2PAppContext ctx) { super(ctx); } + public String toString() { return "Unreadable enclosureBody"; } +} diff --git a/src/syndie/db/ViewMessage.java b/src/syndie/db/ViewMessage.java new file mode 100644 index 0000000..2724ea1 --- /dev/null +++ b/src/syndie/db/ViewMessage.java @@ -0,0 +1,158 @@ +package syndie.db; + +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.DataHelper; +import net.i2p.data.Hash; +import syndie.data.ChannelInfo; +import syndie.data.MessageInfo; +import syndie.data.ReferenceNode; + +/** + *CLI viewmessage + * --db $url + * --login $login + * --pass $pass + * --internalid $internalMessageId + * --out $outputDirectory + */ +public class ViewMessage extends CommandImpl { + ViewMessage() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "internalid", "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "internalid", "out" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + nymId = client.connect(args.getOptValue("db"), args.getOptValue("login"), args.getOptValue("pass")); + if (nymId < 0) { + ui.errorMessage("Login incorrect"); + ui.commandComplete(-1, null); + return client; + } + } else { + nymId = client.getLoggedInNymId(); + if (nymId < 0) { + ui.errorMessage("Not logged in"); + ui.commandComplete(-1, null); + return client; + } + } + long id = args.getOptLong("internalid", -1); + if (id < 0) { + ui.errorMessage("Message ID is invalid"); + ui.commandComplete(-1, null); + } else { + MessageInfo info = client.getMessage(id); + if (info == null) { + ui.errorMessage("Message ID is not known"); + ui.commandComplete(-1, null); + } else { + extractMessage(client, ui, info, args.getOptValue("out")); + } + } + } catch (SQLException se) { + ui.errorMessage("Invalid database URL", se); + ui.commandComplete(-1, null); + //} finally { + // if (client != null) client.close(); + } + return client; + } + + private void extractMessage(DBClient client, UI ui, MessageInfo info, String outDir) { + try { + File dir = new File(outDir); + if (dir.exists()) { + ui.errorMessage("Output directory already exists. 
Aborting"); + ui.commandComplete(-1, null); + return; + } + dir.mkdirs(); + + File statusFile = new File(outDir, "status.txt"); + FileOutputStream fos = new FileOutputStream(statusFile); + fos.write(DataHelper.getUTF8(info.toString())); + fos.close(); + + // now extract the pages and attachments + for (int i = 0; i < info.getPageCount(); i++) { + String data = client.getMessagePageData(info.getInternalId(), i); + if (data != null) { + fos = new FileOutputStream(new File(dir, "page" + i + ".dat")); + fos.write(DataHelper.getUTF8(data)); + fos.close(); + } + + String cfg = client.getMessagePageConfig(info.getInternalId(), i); + if (cfg != null) { + fos = new FileOutputStream(new File(dir, "page" + i + ".cfg")); + fos.write(DataHelper.getUTF8(cfg)); + fos.close(); + } + } + for (int i = 0; i < info.getAttachmentCount(); i++) { + byte data[] = client.getMessageAttachmentData(info.getInternalId(), i); + if (data != null) { + fos = new FileOutputStream(new File(dir, "attachment" + i + ".dat")); + fos.write(data); + fos.close(); + } + + String cfg = client.getMessageAttachmentConfig(info.getInternalId(), i); + if (cfg != null) { + fos = new FileOutputStream(new File(dir, "attachment" + i + ".cfg")); + fos.write(DataHelper.getUTF8(cfg)); + fos.close(); + } + } + + List refs = info.getReferences(); + if (refs.size() > 0) { + String refStr = ReferenceNode.walk(refs); + fos = new FileOutputStream(new File(dir, "references.cfg")); + fos.write(DataHelper.getUTF8(refStr)); + fos.close(); + } + + ui.statusMessage("Message extracted to " + dir.getAbsolutePath()); + ui.commandComplete(0, null); + } catch (IOException ioe) { + ui.errorMessage("Error viewing", ioe); + ui.commandComplete(-1, null); + } + } + + public static void main(String args[]) { + try { + CLI.main(new String[] { "viewmessage", + "--db", "jdbc:hsqldb:file:/tmp/cli", + "--login", "j", + "--pass", "j", + "--internalid", "0", + "--out", "/tmp/msgOut" }); + } catch (Exception e) { e.printStackTrace(); } + } +} diff --git a/src/syndie/db/ViewMetadata.java b/src/syndie/db/ViewMetadata.java new file mode 100644 index 0000000..20732cf --- /dev/null +++ b/src/syndie/db/ViewMetadata.java @@ -0,0 +1,90 @@ +package syndie.db; + +import java.io.File; +import java.sql.SQLException; +import java.util.*; +import net.i2p.I2PAppContext; +import net.i2p.data.Hash; +import syndie.data.ChannelInfo; + +/** + *CLI viewmetadata + * --db $url + * --login $login + * --pass $pass + * --channel $base64(channelHash) + */ +public class ViewMetadata extends CommandImpl { + ViewMetadata() {} + public DBClient runCommand(Opts args, UI ui, DBClient client) { + if ( (client == null) || (!client.isLoggedIn()) ) { + List missing = args.requireOpts(new String[] { "db", "login", "pass", "channel" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } else { + List missing = args.requireOpts(new String[] { "channel" }); + if (missing.size() > 0) { + ui.errorMessage("Invalid options, missing " + missing); + ui.commandComplete(-1, null); + return client; + } + } + + try { + long nymId = -1; + if (args.dbOptsSpecified()) { + if (client == null) + client = new DBClient(I2PAppContext.getGlobalContext(), new File(TextEngine.getRootPath())); + else + client.close(); + nymId = client.connect(args.getOptValue("db"), args.getOptValue("login"), args.getOptValue("pass")); + if (nymId < 0) { + ui.errorMessage("Login incorrect"); + ui.commandComplete(-1, null); + return client; + } + } else { + nymId 
+                nymId = client.getLoggedInNymId();
+                if (nymId < 0) {
+                    ui.errorMessage("Not logged in");
+                    ui.commandComplete(-1, null);
+                    return client;
+                }
+            }
+            Hash channel = new Hash(args.getOptBytes("channel"));
+            long channelId = client.getChannelId(channel);
+            if (channelId < 0) {
+                ui.errorMessage("Channel is not known");
+                ui.commandComplete(-1, null);
+            } else {
+                ChannelInfo info = client.getChannel(channelId);
+                if (info != null) {
+                    ui.statusMessage(info.toString());
+                    ui.commandComplete(0, null);
+                } else {
+                    ui.errorMessage("Error fetching channel " + channelId);
+                    ui.commandComplete(-1, null);
+                }
+            }
+        } catch (SQLException se) {
+            ui.errorMessage("Invalid database URL", se);
+            ui.commandComplete(-1, null);
+        //} finally {
+        //    if (client != null) client.close();
+        }
+        return client;
+    }
+
+    public static void main(String args[]) {
+        try {
+            CLI.main(new String[] { "viewmetadata",
+                                    "--db", "jdbc:hsqldb:file:/tmp/cli",
+                                    "--login", "j",
+                                    "--pass", "j",
+                                    "--channel", "2klF2vDob7M82j8ZygZ-s9LmOHfaAdso5V0DzLvHISI=" });
+        } catch (Exception e) { e.printStackTrace(); }
+    }
+}
diff --git a/src/syndie/db/ddl.txt b/src/syndie/db/ddl.txt
new file mode 100644
index 0000000..6cbfca6
--- /dev/null
+++ b/src/syndie/db/ddl.txt
@@ -0,0 +1,337 @@
+CREATE CACHED TABLE appVersion (
+    app VARCHAR(64) PRIMARY KEY
+    , versionNum INTEGER NOT NULL
+    , visibleVersion VARCHAR(64)
+);
+INSERT INTO appVersion (app, versionNum, visibleVersion) VALUES ('syndie.db', 1, 'Initial version');
+
+-- unique IDs for the channel table, but for transactional and threading
+-- issues, we need to pull the ID first, then insert
+CREATE SEQUENCE channelIdSequence;
+
+CREATE CACHED TABLE channel (
+    -- locally unique id
+    channelId BIGINT IDENTITY PRIMARY KEY
+    , channelHash VARBINARY(32)
+    , identKey VARBINARY(256)
+    , encryptKey VARBINARY(256)
+    , edition BIGINT
+    , name VARCHAR(256)
+    , description VARCHAR(1024)
+    -- can unauthorized people post new topics?
+    , allowPubPost BOOLEAN
+    -- can unauthorized people reply to existing topics?
+    , allowPubReply BOOLEAN
+    , expiration DATE DEFAULT NULL
+    , importDate DATE DEFAULT NULL
+    , UNIQUE (channelHash)
+);
+
+CREATE CACHED TABLE channelTag (
+    channelId BIGINT
+    , tag VARCHAR(64)
+    , wasEncrypted BOOLEAN
+    , PRIMARY KEY (channelId, tag)
+);
+
+-- who can post to the channel
+CREATE CACHED TABLE channelPostKey (
+    channelId BIGINT
+    , authPubKey VARBINARY(256)
+    , PRIMARY KEY (channelId, authPubKey)
+);
+
+-- who can manage the channel (post metadata messages)
+CREATE CACHED TABLE channelManageKey (
+    channelId BIGINT
+    , authPubKey VARBINARY(256)
+    , PRIMARY KEY (channelId, authPubKey)
+);
+
+CREATE CACHED TABLE channelArchive (
+    channelId BIGINT
+    , archiveId BIGINT
+    , wasEncrypted BOOLEAN
+    , PRIMARY KEY (channelId, archiveId)
+);
+
+-- read keys published in the encrypted part of a channel's metadata
+CREATE CACHED TABLE channelReadKey (
+    channelId BIGINT
+    , keyStart DATE DEFAULT NULL
+    , keyEnd DATE DEFAULT NULL
+    , keyData VARBINARY(32)
+    -- if true, the encrypted metadata containing this read key was visible due to an unencrypted
+    -- bodyKey in the public headers
+    , wasPublic BOOLEAN DEFAULT FALSE
+);
+
+CREATE CACHED TABLE channelMetaHeader (
+    channelId BIGINT
+    , headerName VARCHAR(256)
+    , headerValue VARCHAR(4096)
+    , wasEncrypted BOOLEAN
+);
+
+CREATE CACHED TABLE channelReferenceGroup (
+    channelId BIGINT
+    , groupId INTEGER NOT NULL
+    , parentGroupId INTEGER
+    , siblingOrder INTEGER NOT NULL
+    , name VARCHAR(256)
+    , description VARCHAR(1024)
+    , uriId BIGINT
+    -- allows for references of 'ban', 'recommend', 'trust', etc
+    , referenceType INTEGER DEFAULT NULL
+    , wasEncrypted BOOLEAN
+    , PRIMARY KEY (channelId, groupId)
+);
+
+CREATE CACHED TABLE channelAvatar (
+    channelId BIGINT PRIMARY KEY
+    , avatarData LONGVARBINARY
+);
+
+-- unique IDs for the uriAttribute table, but for transactional and threading
+-- issues, we need to pull the ID first, then insert
+CREATE SEQUENCE uriIdSequence;
+
+-- simple URIs are just attribKey="url" attribValString="http://www.i2p.net/",
+-- but other internal URI references are a bit more complicated, with pairs
+-- like "network"="syndie", "type"="channel", "messageId=10199911184", etc.
+-- some of the key=val pairs are descriptive of the URI, and not a unique
+-- part of the URI itself, such as "title"="this is my blog". the canonical
+-- uri takes these and orders them alphabetically (UTF8, UK Locale), ignoring
+-- any descriptive fields
+CREATE CACHED TABLE uriAttribute (
+    uriId BIGINT
+    -- "url", "network", "channel", "messageId", "description", "title"
+    , attribKey VARCHAR(64)
+    -- exactly one of attribVal* must be non-null
+    , attribValString VARCHAR(2048) DEFAULT NULL
+    , attribValLong BIGINT DEFAULT NULL
+    , attribValBool BOOLEAN DEFAULT NULL
+    -- newline (0x0A) delimited strings
+    , attribValStrings VARCHAR(2048) DEFAULT NULL
+    -- if true, this key=val is not part of the URI's unique string,
+    -- but instead just serves to describe the uri
+    , isDescriptive BOOLEAN
+    , PRIMARY KEY (uriId, attribKey)
+);
+
+
+-- unique IDs for the archive table, but for transactional, threading, and portability
+-- issues, we need to pull the ID first, then insert
+CREATE SEQUENCE archiveIdSequence;
+
+CREATE CACHED TABLE archive (
+    archiveId BIGINT PRIMARY KEY
+    -- are we allowed to post (with the auth we have)?
+    , postAllowed BOOLEAN
+    -- are we allowed to pull messages (with the auth we have)?
+    , readAllowed BOOLEAN
+    -- index into uris.uriId to access the archive
+    , uriId BIGINT
+);
+
+
+-- unique IDs for the nym table, but for transactional, threading, and portability
+-- issues, we need to pull the ID first, then insert
+CREATE SEQUENCE nymIdSequence;
+
+CREATE CACHED TABLE nym (
+    nymId INTEGER PRIMARY KEY
+    , login VARCHAR(128) NOT NULL
+    , publicName VARCHAR(128) DEFAULT NULL
+    -- if the passSalt is set, the passHash is the SHA256(password + salt)
+    , passSalt VARBINARY(16) DEFAULT NULL
+    , passHash VARBINARY(32) DEFAULT NULL
+    , isDefaultUser BOOLEAN
+    , UNIQUE (login)
+);
+
+-- nyms may have various keys to perform certain tasks within different
+-- channels
+CREATE CACHED TABLE nymKey (
+    nymId INTEGER
+    , keyChannel VARBINARY(32)
+    -- manage, reply, post, read
+    , keyFunction VARCHAR(32)
+    -- aes256, dsa, elg2048, etc
+    , keyType VARCHAR(32)
+    , keyData VARBINARY(512)
+    -- if the keySalt is set, the keyData is actually AES256/CBC
+    -- encrypted, using SHA256(password + salt[0:15]) as the AES256
+    -- key, and salt[16:31] as the IV
+    , keySalt VARBINARY(32)
+    -- the keys known by a nym may be received from untrusted or unauthenticated
+    -- sources - at first, they should not override other known keys, but if they
+    -- are later authenticated (able to decrypt/verify some authenticated posts,
+    -- etc), they should be marked as such here.
+    , authenticated BOOLEAN DEFAULT FALSE
+    , keyPeriodBegin DATE DEFAULT NULL
+    , keyPeriodEnd DATE DEFAULT NULL
+);
+
+
+-- unique IDs for the resourceGroup.groupId column, but for transactional and threading
+-- issues, we need to pull the ID first, then insert
+CREATE SEQUENCE resourceGroupIdSequence;
+
+-- organize the nym's resource tree (bookmarks, etc)
+CREATE CACHED TABLE resourceGroup (
+    nymId INTEGER NOT NULL
+    , groupId INTEGER NOT NULL
+    , parentGroupId INTEGER NOT NULL
+    , siblingOrder INTEGER NOT NULL
+    , name VARCHAR(128)
+    , description VARCHAR(512)
+    , uriId BIGINT
+    , isIgnored BOOLEAN
+    , isBanned BOOLEAN
+    , loadOnStartup BOOLEAN
+    , PRIMARY KEY (nymId, groupId)
+    , UNIQUE (nymId, parentGroupId, siblingOrder)
+);
+
+-- unique message id
+CREATE SEQUENCE msgIdSequence;
+
+-- actual messages
+CREATE CACHED TABLE channelMessage (
+    -- unique Id internal to the database
+    msgId BIGINT PRIMARY KEY
+    -- what channel's keys are used to authorize and read the
+    -- message, and what namespace the messageId is unique within
+    , scopeChannelId BIGINT
+    , messageId BIGINT
+    -- what channel the post should be grouped into
+    , targetChannelId BIGINT
+    -- who made the post.  may be null if unknown, but is almost always
+    -- the same as the scopeChannelId
+    , authorChannelId BIGINT
+    , subject VARCHAR(256)
+    , overwriteScopeHash VARBINARY(32)
+    , overwriteMessageId BIGINT
+    , forceNewThread BOOLEAN
+    , refuseReplies BOOLEAN
+    , wasEncrypted BOOLEAN
+    -- was the post encrypted with passphrase based encryption
+    , wasPBE BOOLEAN
+    , wasPrivate BOOLEAN
+    -- authorized is set to true if the post was signed by a
+    -- key listed as a poster or manager to the channel, if
+    -- the channel allowed unauthorized posts, or if the channel
+    -- allowed unauthorized replies and the post is in reply to an
+    -- authorized post (either directly or indirectly)
+    , wasAuthorized BOOLEAN
+    , wasAuthenticated BOOLEAN
+    , isCancelled BOOLEAN
+    , expiration DATE
+    , importDate DATE DEFAULT NULL
+    , UNIQUE (scopeChannelId, messageId)
+    -- authorChannelHash, targetChannelId, messageId)
+);
+
+CREATE CACHED TABLE messageHierarchy (
+    msgId BIGINT
+    -- refers to a targetChannelId
+    , referencedChannelHash VARBINARY(32)
+    , referencedMessageId BIGINT
+    -- how far up the tree is the referenced message? parent has a closeness of 1,
+    -- grandparent has a closeness of 2, etc. does not necessarily have to be exact,
+    -- merely relative
+    , referencedCloseness INTEGER DEFAULT 1
+    , PRIMARY KEY (msgId, referencedCloseness)
+);
+
+CREATE CACHED TABLE messageTag (
+    msgId BIGINT
+    , tag VARCHAR(64)
+    , isPublic BOOLEAN DEFAULT false
+    , PRIMARY KEY (msgId, tag)
+);
+
+-- organize the message's references (not including html/sml/etc links, just those in the
+-- references.cfg zip entry)
+CREATE CACHED TABLE messageReference (
+    msgId BIGINT NOT NULL
+    -- referenceId is unique within the msgId scope
+    , referenceId INTEGER NOT NULL
+    , parentReferenceId INTEGER NOT NULL
+    , siblingOrder INTEGER NOT NULL
+    , name VARCHAR(128)
+    , description VARCHAR(512)
+    , uriId BIGINT
+    , refType VARCHAR(64)
+    , PRIMARY KEY (msgId, referenceId)
+    , UNIQUE (msgId, parentReferenceId, siblingOrder)
+);
+
+CREATE CACHED TABLE messageAttachment (
+    msgId BIGINT
+    -- filename is derived from this
+    , attachmentNum INTEGER
+    -- == sizeof(messageAttachmentData.dataBinary)
+    , attachmentSize BIGINT
+    -- suggested mime type
+    , contentType VARCHAR(64)
+    -- suggested name
+    , name VARCHAR(64)
+    -- suggested description
+    , description VARCHAR(256)
+    , PRIMARY KEY (msgId, attachmentNum)
+);
+
+-- holds the actual data of a particular attachment
+CREATE CACHED TABLE messageAttachmentData (
+    msgId BIGINT
+    , attachmentNum INTEGER
+    , dataBinary LONGVARBINARY
+    , PRIMARY KEY (msgId, attachmentNum)
+);
+
+-- holds the config for a particular attachment (unencrypted)
+CREATE CACHED TABLE messageAttachmentConfig (
+    msgId BIGINT
+    , attachmentNum INTEGER
+    , dataString LONGVARCHAR
+    , PRIMARY KEY (msgId, attachmentNum)
+);
+
+CREATE CACHED TABLE messagePage (
+    msgId BIGINT
+    -- 0 indexed
+    , pageNum INTEGER
+    -- mime type
+    , contentType VARCHAR(64)
+    , PRIMARY KEY (msgId, pageNum)
+);
+
+-- holds the raw data for the page (in UTF-8)
+CREATE CACHED TABLE messagePageData (
+    msgId BIGINT
+    , pageNum INTEGER
+    , dataString LONGVARCHAR
+    , PRIMARY KEY (msgId, pageNum)
+);
+
+-- holds the config for a particular page
+CREATE CACHED TABLE messagePageConfig (
+    msgId BIGINT
+    , pageNum INTEGER
+    , dataString LONGVARCHAR
+    , PRIMARY KEY (msgId, pageNum)
+);
+
+CREATE CACHED TABLE messageAvatar (
+    msgId BIGINT PRIMARY KEY
+    , avatarData LONGVARBINARY
+);
+
+-- never import posts from this author or in this channel
+CREATE CACHED TABLE banned (
+    channelHash VARBINARY(32) PRIMARY KEY
+    , bannedOn DATE DEFAULT NULL
+    , cause VARCHAR(256)
+);
diff --git a/src/syndie/db/ddl_update1.txt b/src/syndie/db/ddl_update1.txt
new file mode 100644
index 0000000..145a4d5
--- /dev/null
+++ b/src/syndie/db/ddl_update1.txt
@@ -0,0 +1,13 @@
+-- update the database schema from version 1
+-- this update is so that the 'prefs' command can keep persistent
+-- preferences, loading them on login, etc.
+--
+
+UPDATE appVersion SET versionNum = 2, visibleVersion = 'DB With NymPrefs';
+
+CREATE CACHED TABLE nymPref (
+    nymId INTEGER
+    , prefName VARCHAR(128)
+    , prefValue VARCHAR(256)
+    , PRIMARY KEY (nymId, prefName)
+);
diff --git a/src/syndie/db/ddl_update2.txt b/src/syndie/db/ddl_update2.txt
new file mode 100644
index 0000000..6d0f722
--- /dev/null
+++ b/src/syndie/db/ddl_update2.txt
@@ -0,0 +1,25 @@
+-- update the database schema from version 2
+-- this update (version 3) allows us to reference undecrypted data in the
+-- database, which means still-encrypted messages can be included
+-- in archives, etc.
+--
+
+UPDATE appVersion SET versionNum = 3, visibleVersion = 'DB With still-encrypted data';
+
+-- true if the channel metadata has an encrypted section but we don't have
+-- the read key to decrypt it
+ALTER TABLE channel ADD COLUMN readKeyMissing BOOLEAN DEFAULT FALSE;
+-- contains the prompt to decrypt the metadata if and only if the metadata
+-- could otherwise not be decrypted
+ALTER TABLE channel ADD COLUMN pbePrompt VARCHAR(256) DEFAULT NULL;
+
+-- true if the message is a normal post but we don't have the decryption key
+-- to read it
+ALTER TABLE channelMessage ADD COLUMN readKeyMissing BOOLEAN DEFAULT FALSE;
+-- true if the message is a private reply message and we don't have the
+-- decryption key to read it
+ALTER TABLE channelMessage ADD COLUMN replyKeyMissing BOOLEAN DEFAULT FALSE;
+-- contains the prompt to decrypt the body if and only if the body could
+-- otherwise not be decrypted
+ALTER TABLE channelMessage ADD COLUMN pbePrompt VARCHAR(256) DEFAULT NULL;
+
diff --git a/src/syndie/db/ddl_update3.txt b/src/syndie/db/ddl_update3.txt
new file mode 100644
index 0000000..923accb
--- /dev/null
+++ b/src/syndie/db/ddl_update3.txt
@@ -0,0 +1,13 @@
+-- update the database schema from version 3
+-- this update (version 4) creates a set of per-nym command aliases
+--
+
+UPDATE appVersion SET versionNum = 4, visibleVersion = 'DB With aliases';
+
+CREATE CACHED TABLE nymCommandAlias (
+    nymId INTEGER
+    , aliasName VARCHAR(64)
+    , aliasValue VARCHAR(1024)
+    , PRIMARY KEY (nymId, aliasName)
+);
+
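The two per-nym tables added by ddl_update1.txt and ddl_update3.txt above (nymPref and nymCommandAlias) are both keyed on nymId, so a front end can save and restore a nym's state with plain keyed lookups at login. The following is a minimal illustrative sketch only, not part of the schema: the nymId of 0, the 'defaultArchive' preference, and the 'vm' alias are made-up example values.

-- hypothetical rows for nym 0: one saved preference and one command alias
INSERT INTO nymPref (nymId, prefName, prefValue)
    VALUES (0, 'defaultArchive', 'http://localhost:8080/');
INSERT INTO nymCommandAlias (nymId, aliasName, aliasValue)
    VALUES (0, 'vm', 'viewmetadata --channel 2klF2vDob7M82j8ZygZ-s9LmOHfaAdso5V0DzLvHISI=');

-- restore the nym's state at login
SELECT prefName, prefValue FROM nymPref WHERE nymId = 0;
SELECT aliasName, aliasValue FROM nymCommandAlias WHERE nymId = 0;

Because both tables declare composite primary keys ((nymId, prefName) and (nymId, aliasName)), re-saving an existing preference or alias is an UPDATE against the same key rather than a second INSERT.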