
Here's an idea: Let's define an XML-RPC or SOAP interface to Wiki. I don't exactly know what we could do with it, but at least we could do things like:

  • Automatic notification of page changes (someone would need to write a script that checks RecentChanges, then emails anyone interested).
  • Combining Wikis in a manner more efficient than InterWiki.

This would save us from having to implement all sorts of features in JSPWiki itself, and allow people to make their own modular thingies.

That's a very interesting thought. It allows any outside process to monitor and perform actions based on the current state of the Wiki, but without actually affecting the Wiki.

As specified above, the only way to get a list of pages is getRecentChanges(). It seems a bit limiting to assume that all another process would care about is recently changed pages. What about:

  • getAllPages(): Get list of all Wiki pages.
  • getMatchingPages(String query): Get list of all Wiki pages that match the query. Of course, this then begs the question of what matching means. That's kind of annoying.

getMatchingPages() may be overkill. Perhaps these queries would be better implemented with a SOAPProvider, a WikiPageProvider based on SOAP. Then the matching SOAP service can store the files any way it likes, be notified the instant that a page is stored, and query the page text any silly way it feels like. A SOAPProvider would also get around the limitation that the API specified above requires the interested party to "poll" the WikiEngine periodically. With a SOAPProvider, if you care about every save or read, you can track them yourself.


Two problems:

  1. Date handling.
  2. UTF-8.

First, dates: The XML-RPC spec has no way to specify the timezone. Which means that you actually would have to know the timezone in which the server resides... And which also means that the XML-RPC client can use whatever timezone it likes to interpret the incoming message. And that ain't fun. We could of course

  • a) Just send an int meaning seconds of UTC since EPOCH,
  • b) Send a long disguised as a String meaning milliseconds of UTC since EPOCH,
  • c) Send an ISO-format String as the date, or
  • d) Try to figure out a way to talk UTC under XML-RPC.
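Option (a) is actually quite easy in Java, since java.util.Date already counts milliseconds of UTC since the epoch; only the decoding side has to remember to pin its Calendar to UTC instead of the default zone. A minimal sketch (class and method names are mine, not part of any API):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class UtcSeconds {
    // Option (a): encode a Date as whole seconds of UTC since the epoch.
    // Date.getTime() is already UTC-based, so no zone math is needed here.
    static int toUtcSeconds(Date d) {
        return (int) (d.getTime() / 1000L);
    }

    // Decode into a Calendar explicitly pinned to UTC, so the receiving
    // side never interprets the value in its local TimeZone.
    static Calendar fromUtcSeconds(int seconds) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTime(new Date(seconds * 1000L));
        return cal;
    }

    public static void main(String[] args) {
        Date now = new Date(1014130260000L); // an arbitrary example timestamp
        int secs = toUtcSeconds(now);
        System.out.println(secs);
        System.out.println(fromUtcSeconds(secs).getTime().getTime());
    }
}
```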

Second, sending UTF-8. XML-RPC spec says "ASCII". Which sucks, again. Sending ISO-8859-1 seems to work, but is there any guarantee that the recipient can talk UTF-8? And how would he know the incoming character set? Or would that be the responsibility of the XML parser?

At the moment the Apache XML-RPC library seems to use Latin1 ONLY. The euro sign (€) gets killed on the way, as do all other non-Latin1 characters.


More updates. The UTF-8 issue seems to have been talked to death on the XML-RPC mailing list. The summary seems to be: "While many toolkits might support something else than ASCII in string values, the XML-RPC spec is frozen, and will never change. If you transport something else than ASCII, you're in violation of the spec. Use base64."

Using base64 would mean that all methods that now use strings should use base64 (because JSPWiki supports UTF-8 across the board - in fact even ISO-Latin1 is not supposed to go through XML-RPC strings). Which means more work for the application writer, since he has to encode/decode all stuff going back and forth. Gng. XML-RPC is not person-to-person interoperable - many people are unable to write their own names as strings.
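To illustrate what that encode/decode burden looks like: every string value has to be wrapped and unwrapped by hand on both ends. This sketch uses java.util.Base64 from modern Java (Java 8+) purely for illustration; a 2002-era client would have used whatever codec its XML-RPC library shipped with:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Utf8 {
    // Wrap arbitrary UTF-8 page text into ASCII-safe base64 for the wire.
    static String encodePage(String text) {
        return Base64.getEncoder().encodeToString(text.getBytes(StandardCharsets.UTF_8));
    }

    // Unwrap a base64 value back into a UTF-8 String.
    static String decodePage(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String page = "Hyvää päivää!"; // page text with non-ASCII characters
        String wire = encodePage(page); // pure ASCII, legal in any XML-RPC value
        System.out.println(wire);
        System.out.println(decodePage(wire).equals(page));
    }
}
```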

I'm seriously considering SOAP at this point. Or breaking the XML-RPC spec knowingly and willingly; call it WikiRPC or something =). (XML-RPC is a registered trademark of Userland Software).

The date issue is easier, but java.util.Calendar and the XML-RPC library do not make it exactly simple to assume UTC. Some shifting back and forth is needed.


Here is now the API as of v1.6.11 (the command prefix being jspwiki.):

  • getRecentChanges( Date timestamp ): Get list of changed pages since timestamp, which should be in UTC. The result is an array, where each element is a struct:
    • name (string) : Name of the page. The name is UTF-8 with URL encoding to make it ASCII.
    • lastModified (date) : Date of last modification, in UTC.
    • author (string) : Name of the author (if available). Again, name is UTF-8 with URL encoding.
    • version (int) : Current version.
  • getRPCVersionSupported(): Returns 1 with this version of the JSPWiki API.
  • base64 getPage( String pagename ): Get the raw Wiki text of page, latest version. Page name must be UTF-8, with URL encoding. Returned value is a binary object, with UTF-8 encoded page data.
  • base64 getPage( String pagename, int version ): Get the raw Wiki text of page. Returns UTF-8, expects UTF-8 with URL encoding.
  • base64 getPageHTML( String pagename ): Return page in rendered HTML. Returns UTF-8, expects UTF-8 with URL encoding.
  • base64 getPageHTML( String pagename, int version ): Return page in rendered HTML, UTF-8.
  • array getAllPages(): Returns a list of all pages. The result is an array of strings, again UTF-8 in URL encoding.
  • struct getPageInfo( string pagename ) : returns a struct with elements
    • name (string): the canonical page name, URL-encoded UTF-8.
    • lastModified (date): Last modification date, UTC.
    • version (int): current version
    • author (string): author name, URL-encoded UTF-8.
  • struct getPageInfo( string pagename, int version ) : returns a struct just like plain getPageInfo(), but this time for a specific version.

As you can see, all page data is returned as a base64 type in UTF-8 encoding, regardless of what the JSPWiki preference actually is. Also, all incoming and outgoing strings are really UTF-8, but they have been URL-encoded so that the XML-RPC requirement of ASCII is fulfilled.
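The URL-encoding half of this convention needs nothing beyond the standard URLEncoder/URLDecoder classes. A sketch of how a client might encode a page name before a call and decode one from a result (helper names are mine; note also that URLEncoder turns spaces into "+", not "%20"):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class UrlEncodedUtf8 {
    // Encode a page name as ASCII-only URL-encoded UTF-8 for the wire.
    static String encodeName(String name) throws Exception {
        return URLEncoder.encode(name, "UTF-8");
    }

    // Decode a URL-encoded UTF-8 page name coming back from the server.
    static String decodeName(String wire) throws Exception {
        return URLDecoder.decode(wire, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String name = "Käyttäjä"; // a page name with non-ASCII characters
        String wire = encodeName(name); // pure ASCII, safe as an XML-RPC string
        System.out.println(wire);
        System.out.println(decodeName(wire).equals(name));
    }
}
```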

Let me know if this works...


MahlenMorris: It seems to be working fine. I'm turning the base64 back to a String by calling new String((byte[]) server.execute(GETPAGE, args), "UTF-8"); does that seem right? Like most ignorant Americans, I find I18N and character encodings very mysterious :)

I'm still having trouble getting the time zone right, though. I've been looking at what you did in your code, but no matter what I do I get times that think they are PST but are in fact EET. For example, as I write this it thinks that the TODOList was last changed at 00:01 PST, when it was really 00:01 EET. If you were really sending UTC, I don't think I'd be getting that. Should that be working yet?

JanneJalkanen: Yeah, that's the correct way to get UTF-8. It's entirely possible that I screwed up something in the TimeZone thing... I didn't really test it properly. BTW, note that XML-RPC does not transport TimeZone information at all, and the Apache XML-RPC library always assumes your default TimeZone when it's reading the timestamp. You'll have to manipulate the result with the Calendar class to make sure it's UTC.
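Since the library always reads the wall-clock value in the default TimeZone, the "manipulation" amounts to adding back the parsing zone's offset. A defensive sketch (method name is mine, and the zone is passed in explicitly so the behavior is deterministic):

```java
import java.util.Date;
import java.util.TimeZone;

public class UtcShift {
    // A <dateTime.iso8601> value carries no zone, so a library that parses
    // it in zone parserZone lands on the wrong instant if the sender meant
    // UTC. Adding the zone's offset back recovers the intended instant.
    static Date reinterpretAsUtc(Date parsedLocal, TimeZone parserZone) {
        long t = parsedLocal.getTime();
        return new Date(t + parserZone.getOffset(t));
    }

    public static void main(String[] args) {
        // "1970-01-01 00:00" parsed in GMT+2 lands two hours early;
        // shifting restores the epoch instant the sender actually meant.
        Date parsed = new Date(-2L * 3600 * 1000);
        Date utc = reinterpretAsUtc(parsed, TimeZone.getTimeZone("GMT+2"));
        System.out.println(utc.getTime());
    }
}
```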

ErnoOxman: It would be nice to have something like setPage( String pagename, base64 text) for the next version of API.

Expect to see J2ME client soon...

JanneJalkanen: Yes, it's probably a good idea to have one as well. I wanted to make sure that getting pages works before allowing anyone to write a four-line script that effectively deletes all pages :-).

A secondary note: Should the putPage API also include a username-password combination?

ErnoOxman: I guess authentication could be done with HTTP Basic Authentication header, which is supported at least in Apache XML-RPC package. If it doesn't feel suitable for some reason, username-password combination would be ok, too.

Funny, I was just thinking of implementing something like this for UseMod and / or TWiki. Seems like it wouldn't take too much to do, and it'd be nice to jump on a common interface bandwagon. Let's see... how quick can I throw it together...

(Oh yeah, and I wandered in off the street from

Something else I was thinking of for my own Wiki RPC API, which might sound strange, was a wiki filter method. That is, accept text, process text for formatting and WikiWords, return content with links and formatting applied (sans wiki header/footer).

My first purpose in mind was to use this as a way to join a weblog and a wiki, where the weblog entries link to the wiki and can be written in the wiki's style.

-- LesOrchard

I'm scratching my head a bit on that API extension. I too am growing very fond of the idea of using Wiki TextFormattingRules to edit my web pages (I've even noticed myself accidentally using Wiki style in Word documents). But if you want to look at Wiki pages without the header/footer/left menu, why not just edit the .JSP page so it doesn't include those items? Aren't those other elements part of the page design? I'm not sure how TWiki is designed, so admittedly this comment may not be all that germane. Am I misunderstanding your desired effect? Using Wiki to create pages that are not editable by the masses?

This problem does point to one limitation of the current structure of the code. TranslatorReader thinks that the only .JSP that you'd want to view pages with is "wiki.jsp". But if I wanted to have two different views of the data, one that I use to view and potentially edit pages, and another that just shows the pages without suggesting the ability to edit them, there's no good method for that. I think this implies that there's too close a tie between the Model and the View. Maybe TranslatorReader could take an argument saying which page to use for viewing pages.

By the way, I know that the above situation could be partially implemented by using permissions (give me read/write permissions, but no one else). But the pages would still have the "Edit this page" links on them; they just wouldn't work for anyone else. A cleaner way to do that would be interesting.

But then, this latter seems counter to the Wiki Way. Maybe the real solution is to make a Weblog that had wiki-style editing!

(The above is Mahlen rambling on a topic; the above paragraphs are not believed to form a coherent idea) -- MahlenMorris

I agree that the current TranslatorReader is not as independent as it should be. Partly this is because I wanted to avoid the complications of making a completely generic Wiki translator - I figured nobody else would be interested in using the same kind of translator =).

You could do two things, though:

As for a Wiki&Weblog synthesis, PikiePikie is a sort of combination, I believe. I just couldn't really make heads or tails of it =).

Also, you can get the Wiki HTML by using the getPageHTML() methods of the XML-RPC API. That way you get it without the headers/footers.


Oh yeah - and making a common Wiki interface is a cool thing, I agree. Perhaps we should define a standard "wiki." -prefix for all commands, so that you could use "twiki." or "jspwiki." or whatever for app-specific thingies? :-) --JanneJalkanen

Yup, it looks like PikiePikie has something quite similar to what I'm thinking of: A weblog whose entries lead into the wiki itself. In the case of PikiePikie, the weblog is a trick of the wiki itself. What I'm thinking of is where something like BlogApp is used to post a weblog entry to a MovableType weblog, and via some filter (say, a BloggerAPIProxy) which calls on the WikiRPCInterface, that weblog entry is imbued with links to the wiki on the site before it reaches MovableType. So, a site would have a weblog for timely news and updates and a wiki for more long-term idea development. Sure, the wiki's RecentChanges could serve as a source of news and updates, but a weblog is a more explicit tool for that.

(OH, look, I found a discussion of this sort of thing at Wiki:WikiLog. /me wanders over there.)


Excellent, thanks Les! I was pondering about writing an RSS feed for JSPWiki, and now I've got the spec, too. I think it's on target for 1.8.0.


I'm very happy with PikiePikie. It produces RssFeeds for RecentChanges and each weblog. My fondest wish would be if I could have a WeblogEntry actually be a regular wiki page. Anyway, I track some of these things at my (PikiePikie) wiki-weblog AbbeNormal, and on my Wiki Weblog PIM page.

Do you know about the existing wiki extensions to RSS? See Meatball:RssExtensionModuleForWikis. Also, OpenWiki both emits and embeds RssFeeds.

I'm assuming you've already looked at the links on Meatball:WikiInterchangeFormat.

And thanks for your work on all this! I think there are some great possibilities that we can't even imagine if we get wikis exchanging stuff with each other and other software. The translation aspect between different wiki markup is difficult, but useful results are possible.


Yes, I know of the RSS Wiki standard. Tracking it is covered in RSSFeedForJSPWiki.

I'm sort of dreaming about a RecentChangesPlugin that could download its contents from any RSS feed from any Wiki or Weblog. Something like:

[{INSERT RSSPlugin WHERE source=, since=2d}]

That way I could have a single page with the most interesting changes :-).


Based on the work you have done here I've added experimental XML-RPC and SOAP support for the same methods as you use. You can find the methods (with some limited autogenerated documentation, expect better docs tomorrow) here:

One thing that is very different with my methods is that I have decided to break the ASCII rule of XML-RPC and return the data as UTF-8 anyway. If anyone has a huge problem with that they can just use the SOAP method instead ;-)

Feedback is appreciated! Thanks for this very interesting work! I will follow it and probably evolve it a little bit myself :-)


Whee, this is definitely cool :-). I deliberately wanted to stay compatible with XML-RPC spec because, well, it makes sense to be compatible. Not to mention that the Java XML-RPC library didn't take UTF-8 too well anyway. Also, you'll need to convert the page data anyway, since it's possible to use < and > inside the text, which makes it necessary to turn them into HTML entities. So it doesn't really matter much whether you do the whole UTF-8 into base64 or UTF-8 into escaped UTF-8.
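The point about < and > is worth making concrete: even "escaped UTF-8" page text has to go through XML entity escaping before it can sit inside a value element. A minimal sketch of such an escaper (names are mine, and only the three characters that matter for well-formedness are handled):

```java
public class XmlEscape {
    // Minimal escaping so raw page text can be embedded in XML character
    // data: & first conceptually, but a single pass per character is safe.
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;"); break;
                case '<':  sb.append("&lt;");  break;
                case '>':  sb.append("&gt;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("if (a < b && b > c) { ... }"));
    }
}
```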

(I cleaned some older stuff away, BTW...)


I had three reasons for not using the base64 approach:

  1. I think the ASCII rule in XML-RPC is a huge bug. And Dave Winer does as well :-)
  2. My main platform is JavaScript... and it cannot handle base64 really well...
  3. If anyone really objects to it I can just point them to the SOAP implementation ;-)

Do you have any ideas for other methods that we should implement? :-) I've been thinking about making a setPage() method for writing content...


On a secondary note - can you be sure that the newlines on the Wiki page (which tend to be very meaningful) always go through the XML transformation properly? I am not really certain about that myself, but I've found it best not to make assumptions. :-)
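This worry is justified: the XML 1.0 spec requires parsers to normalize line endings, turning \r\n and bare \r into \n in parsed character data, so page text sent as an XML string can come back with different newlines than it left with. Base64 sidesteps this entirely; otherwise a defensive normalization step (sketch below, names mine) keeps comparisons honest:

```java
public class NewlineNormalize {
    // Apply the same end-of-line normalization an XML 1.0 parser performs:
    // \r\n and lone \r both become \n. Normalizing both sides before a
    // comparison means a round-trip through XML can't produce a false diff.
    static String normalize(String s) {
        return s.replace("\r\n", "\n").replace('\r', '\n');
    }

    public static void main(String[] args) {
        String page = "line one\r\nline two\rline three\n";
        System.out.println(normalize(page).equals("line one\nline two\nline three\n"));
    }
}
```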

The whole XML-RPC is a bug. Darned infectious at that, I'd say =).

Note that you can, of course, break the XML-RPC standard. You just can't call it XML-RPC anymore, since UserLand software owns the trademark.

I think the proper call for setPage() is something like:

  • setPage( string pageName, base64 text ): Sets the page text. Now, what should it return? The old page text? An error code? An error message?

I think we can do user authentication in

  • a separate call (setPage( string username, string password, string pageName, base64 text )), or
  • using HTTP Basic authentication, or
  • allow both.
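For the HTTP Basic authentication option, the Authorization header is just base64 of "username:password". A sketch of building it (using java.util.Base64 from modern Java for illustration; the test value is the well-known example from the HTTP authentication RFC):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Build the value of an "Authorization" header for HTTP Basic auth:
    // the literal "Basic " followed by base64("username:password").
    static String basicAuthHeader(String user, String password) {
        String pair = user + ":" + password;
        byte[] bytes = pair.getBytes(StandardCharsets.UTF_8);
        return "Basic " + Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("Aladdin", "open sesame"));
    }
}
```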


« This particular version was published on 19-Feb-2002 16:51 by