
Here's an idea: Let's define an XML-RPC or SOAP interface to Wiki. I don't exactly know what we could do with it, but at least we could do things like:

  • Automatic notification of page changes (someone would need to write a script that checks the RecentChanges, then emails anyone interested).
  • Combining Wikis in a manner more efficient than InterWiki.

This would save us from having to implement all sorts of features in JSPWiki itself, and allow people to make their own modular thingies.


Here is the API as of v1.6.12 (the command prefix being wiki.):

  • array getRecentChanges( Date timestamp ): Get list of changed pages since timestamp, which should be in UTC. The result is an array, where each element is a struct:
    • name (string) : Name of the page. The name is UTF-8 with URL encoding to make it ASCII.
    • lastModified (date) : Date of last modification, in UTC.
    • author (string) : Name of the author (if available). Again, name is UTF-8 with URL encoding.
    • version (int) : Current version.
  • int getRPCVersionSupported(): Returns 1 with this version of the JSPWiki API.
  • base64 getPage( String pagename ): Get the raw Wiki text of page, latest version. Page name must be UTF-8, with URL encoding. Returned value is a binary object, with UTF-8 encoded page data.
  • base64 getPageVersion( String pagename, int version ): Get the raw Wiki text of page. Returns UTF-8, expects UTF-8 with URL encoding.
  • base64 getPageHTML( String pagename ): Return page in rendered HTML. Returns UTF-8, expects UTF-8 with URL encoding.
  • base64 getPageHTMLVersion( String pagename, int version ): Return page in rendered HTML, UTF-8.
  • array getAllPages(): Returns a list of all pages. The result is an array of strings, again UTF-8 in URL encoding.
  • struct getPageInfo( string pagename ) : returns a struct with elements
    • name (string): the canonical page name, URL-encoded UTF-8.
    • lastModified (date): Last modification date, UTC.
    • author (string): author name, URL-encoded UTF-8.
    • version (int): current version
  • struct getPageInfoVersion( string pagename, int version ) : returns a struct just like plain getPageInfo(), but this time for a specific version.
  • array listLinks( string pagename ): Lists all links for a given page. The returned array contains structs, with the following elements:
    • name (string) : The page name or URL the link is to.
    • type (int) : The link type. Zero (0) for internal Wiki link, one (1) for external link (URL - image link, whatever).
    • I could use some comments on this --Janne
As you can see, all page data is returned as a base64 type in UTF-8 encoding, regardless of what the JSPWiki preference actually is. Also, all incoming and outgoing strings are really UTF-8, but they have been URL-encoded so that the XML-RPC requirement of ASCII is fulfilled.
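The encoding convention above (UTF-8 page names URL-escaped into ASCII, page bodies carried as base64-wrapped UTF-8 bytes) can be sketched like this. This is an illustrative Python sketch; the helper names are my own, not part of the API:

```python
from urllib.parse import quote, unquote

def encode_pagename(name):
    # Encode a Unicode page name as URL-escaped UTF-8, so only ASCII
    # crosses the wire inside an XML-RPC string value
    return quote(name.encode("utf-8"))

def decode_pagename(wire_name):
    # Reverse: URL-unescape, then interpret the bytes as UTF-8
    return unquote(wire_name, encoding="utf-8")

def decode_page_body(value):
    # getPage() returns a base64 value; most XML-RPC libraries hand the
    # caller the already-decoded bytes, which are UTF-8 page text
    return value.decode("utf-8")
```

A caller would pass encode_pagename("SomePage") as the pagename argument and run the returned binary value through decode_page_body().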

The URL is (note the trailing slash).


All methods which handle a page in any way can return a Fault. Current fault codes are:

  • 1 : No such page was found.


The UTF-8 issue seems to have been talked to death on the XML-RPC mailing list. The summary seems to be: "While many toolkits might support something else than ASCII in string values, the XML-RPC spec is frozen, and will never change. If you transport something else than ASCII, you're in violation of the spec. Use base64."

Using base64 would mean that all methods that now use strings should use base64 (because JSPWiki supports UTF-8 across the board - in fact, even ISO-Latin-1 is not supposed to go through XML-RPC strings). Which means more work for the application writer, who has to encode/decode all the stuff going back and forth. Gng. XML-RPC is not person-to-person interoperable - many people are unable to write their own names as strings.

I'm seriously considering SOAP at this point. Or breaking the XML-RPC spec knowingly and willingly; call it WikiRPC or something =). (XML-RPC is a registered trademark of Userland Software).


MahlenMorris It seems to be working fine. I'm turning the base64 back to a String by calling new String((byte[]) server.execute(GETPAGE, args), "UTF-8"); does that seem right? As for most ignorant Americans, I18N and character encodings are very mysterious to me :)

I'm still having trouble getting the time zone right, though. I've been looking at what you did in your code, but no matter what I do I get times that think they are PST but are in fact EET. For example, as I write this it thinks that the TODOList was last changed at 00:01 PST, when it was really 00:01 EET. If you were really sending UTC, I don't think I'd be getting that. Should that be working yet?

JanneJalkanen: Yeah, that's the correct way to get UTF-8. It's entirely possible that I screwed up something in the TimeZone thing... I didn't really test it properly. BTW, note that XML-RPC does not transport TimeZone information at all, and the Apache XML-RPC library always assumes your default TimeZone when it's reading the timestamp. You'll have to manipulate the result with the Calendar class to make sure it's UTC.
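The correction Janne describes (the XML-RPC dateTime.iso8601 type carries no zone, so a library that parsed the server's UTC timestamp in the local zone hands back a mislabeled value that the caller must re-label as UTC) can be sketched in Python terms; the helper name is hypothetical:

```python
from datetime import datetime, timezone

def reinterpret_as_utc(naive):
    # The wire format has no zone marker. If the server sent UTC but the
    # library parsed it as local time, the digits are right and only the
    # zone label is wrong - so attach UTC without shifting the value.
    return naive.replace(tzinfo=timezone.utc)
```

The Java equivalent is the Calendar manipulation mentioned above: construct a Calendar in the UTC TimeZone from the parsed field values rather than converting the Date.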

ErnoOxman: It would be nice to have something like setPage( String pagename, base64 text) for the next version of API.

Expect to see J2ME client soon...

JanneJalkanen: Yes, it's probably a good idea to have one as well. I wanted to make sure the getting of pages work before allowing anyone to write a four-line script effectively deleting all pages :-).

A secondary note: Should the putPage API also include a username-password combination?

ErnoOxman: I guess authentication could be done with HTTP Basic Authentication header, which is supported at least in Apache XML-RPC package. If it doesn't feel suitable for some reason, username-password combination would be ok, too.

Funny, I was just thinking of implementing something like this for UseMod and / or TWiki. Seems like it wouldn't take too much to do, and it'd be nice to jump on a common interface bandwagon. Let's see... how quick can I throw it together...

(Oh yeah, and I wandered in off the street from

Something else I was thinking of for my own Wiki RPC API, which might sound strange, was a wiki filter method. That is, accept text, process text for formatting and WikiWords, return content with links and formatting applied (sans wiki header/footer).

My first purpose in mind was to use this as a way to join a weblog and a wiki, where the weblog entries link to the wiki and can be written in the wiki's style.

-- LesOrchard

I'm scratching my head a bit on that API extension. I too am growing very fond of the idea of using Wiki TextFormattingRules to edit my web pages (I've even noticed myself accidentally using Wiki style in Word documents). But if you want to look at Wiki pages without the header/footer/left menu, why not just edit the .JSP page that doesn't include those items? Aren't those other elements part of the page design? I'm not sure how TWiki is designed, so admittedly this comment may not be all that germane. Am I misunderstanding your desired effect? Using Wiki to create pages that are not editable by the masses?

This problem does point to one limitation of the current structure of the code. TranslatorReader thinks that the only .JSP that you'd want to view pages with is "wiki.jsp". But if I wanted to have two different views of the data, one that I use to view and potentially edit pages, and another that just shows the pages without suggesting the ability to edit them, there's no good method for that. I think this implies that there's too close a tie between the Model and the View. Maybe TranslatorReader could take arguments saying which .JSP page to use for viewing pages.

By the way, I know that the above situation could be partially implemented by using permissions (give me read/write permissions, but no one else). But the pages would still have the "Edit this page" links on them; they just wouldn't work for anyone else. A cleaner way to do that would be interesting.

But then, this latter seems counter to the Wiki Way. Maybe the real solution is to make a Weblog that had wiki-style editing!

(The above is Mahlen rambling on a topic; the above paragraphs are not believed to form a coherent idea) -- MahlenMorris

I agree that the current TranslatorReader is not as independent as it should be. Partly this is because I wanted to avoid the complications of making a completely generic Wiki translator - I figured nobody else would be interested in using the same kind of translator =).

You could do two things, though:

As for a Wiki&Weblog synthesis, PikiePikie is a sort of combination, I believe. I just couldn't really make heads or tails of it =).

Also, you can get the Wiki HTML by using the getPageHTML() methods of the XML-RPC API. That way you get it without the headers/footers.


Oh yeah - and making a common Wiki interface is a cool thing, I agree. Perhaps we should define a standard "wiki." -prefix for all commands, so that you could use "twiki." or "jspwiki." or whatever for app-specific thingies? :-) --JanneJalkanen

Yup, it looks like PikiePikie has something quite similar to what I'm thinking of: A weblog whose entries lead into the wiki itself. In the case of PikiePikie, the weblog is a trick of the wiki itself. What I'm thinking of is where something like BlogApp is used to post a weblog entry to a MovableType weblog, and via some filter (say, a BloggerAPIProxy) which calls on the WikiRPCInterface, that weblog entry is imbued with links to the wiki on the site before it reaches MovableType. So, a site would have a weblog for timely news and updates and a wiki for more long-term idea development. Sure, the wiki's RecentChanges could serve as a source of news and updates, but a weblog is a more explicit tool for that.

(OH, look, I found a discussion of this sort of thing at Wiki:WikiLog. /me wanders over there.)


Excellent, thanks Les! I was pondering about writing an RSS feed for JSPWiki, and now I've got the spec, too. I think it's on target for 1.8.0.


I'm very happy with PikiePikie. It produces RssFeeds for RecentChanges and each weblog. My fondest wish would be if I could have a WeblogEntry actually be a regular wiki page. Anyway, I track some of these things at my (PikiePikie) wiki-weblog AbbeNormal, and on my Wiki Weblog PIM page.

Do you know about the existing wiki extensions to RSS? See Meatball:RssExtensionModuleForWikis. Also, OpenWiki both emits and embeds RssFeeds.

I'm assuming you've already looked at the links on Meatball:WikiInterchangeFormat.

And thanks for your work on all this! I think there are some great possibilities that we can't even imagine if we get wikis exchanging stuff with each other and other software. The translation aspect between different wiki markup is difficult, but useful results are possible.


Yes, I know of the RSS Wiki standard. Tracking it is covered in RSSFeedForJSPWiki.

I'm sort of dreaming about a RecentChangesPlugin that could download its contents from any RSS feed from any Wiki or Weblog. Something like:

[{INSERT RSSPlugin WHERE source=, since=2d}]

That way I could have a single page with the most interesting changes :-).


Based on the work you have done here I've added experimental XML-RPC and SOAP support for the same methods as you use. You can find the methods (with some limited autogenerated documentation, expect better docs tomorrow) here:

One thing that is very different with my methods is that I have decided to break the ASCII rule of XML-RPC and return the data as UTF-8 anyway. If anyone has a huge problem with that they can just use the SOAP method instead ;-)

Feedback is appreciated! Thanks for this very interesting work! I will follow it and probably evolve it a little bit myself :-)


Whee, this is definitely cool :-). I deliberately wanted to stay compatible with XML-RPC spec because, well, it makes sense to be compatible. Not to mention that the Java XML-RPC library didn't take UTF-8 too well anyway. Also, you'll need to convert the page data anyway, since it's possible to use < and > inside the text, which makes it necessary to turn them into HTML entities. So it doesn't really matter much whether you do the whole UTF-8 into base64 or UTF-8 into escaped UTF-8.

(I cleaned some older stuff away, BTW...)


I had three reasons for not using the base64 approach. 1. I think the ASCII rule in XML-RPC is a huge bug. And Dave Winer does as well :-) 2. My main platform is JavaScript... and it cannot handle base64 very well... 3. If anyone really opposes it I can just point them to the SOAP implementation ;-)

Do you have any ideas for other methods that we should implement? :-) I've been thinking about making a setPage() method for writing content...


On a secondary note - can you be sure that the newlines on the Wiki page (which tend to be very meaningful) always go through the XML transformation properly? I am not really certain about that myself, but I've found it best not to make assumptions. :-)

Careful reading of the XML spec says that newlines go untranslated. So it's okay.

The whole XML-RPC is a bug. Darned infectious at that, I'd say =).

Note that you can, of course, break the XML-RPC standard. You just can't call it XML-RPC anymore, since UserLand software owns the trademark.

I think the proper call for setPage() is something like:

  • setPage( string pageName, base64 text ): Sets the page text. Now, what should it return? The old page text? An error code? An error message?

I think we can do user authentication in

  • a separate call (setPage( string username, string password, string pageName, base64 text), or
  • using HTTP Basic authentication, or
  • allow both.
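The HTTP Basic authentication option amounts to sending a base64-encoded "user:pass" token in the Authorization header, which the web server checks before the RPC call even reaches the wiki. A minimal sketch of the token construction (the helper name is my own):

```python
import base64

def basic_auth_header(username, password):
    # HTTP Basic authentication: base64 of "user:pass", prefixed with
    # the scheme name, sent as the Authorization request header
    token = base64.b64encode(
        "{0}:{1}".format(username, password).encode("utf-8")
    ).decode("ascii")
    return "Basic " + token
```

An XML-RPC client library that supports Basic auth (the Apache package does, per Erno's note) would add this header itself; otherwise the caller sets it on the HTTP request before invoking setPage().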


If Dave Winer breaks XML-RPC in that way I will as well :-) And if UserLand doesn't want me to, I will just take down the XML-RPC end of that web service.

As for escaping the HTML: the string I return is inside a CDATA section, so it can contain any markup besides the end of the CDATA section (which OpenWiki will fail on anyway :-). So, because of this bug in OpenWiki I won't have this problem. But of course this is not a very good way of doing it... the CDATA sections need to be escaped.

The setPage() seems good. I will try to implement it later today. I would say we go for: setPage(pageName, text, username, password)


Good point on Dave. So, I was going to release 1.7.0 over the weekend, which probably should have the API fixed. Shall we go with the "UTF-8 in strings" or "UTF-8 in base64" -approach? --JanneJalkanen

Just skimming through updates since the last time I visited this page, but... I'm thinking this week of working up an implementation of this API on top of UseModWiki v0.92 and TWiki. Don't have much time to write at the moment, but wanted to drop in my US$0.02 about the authentication thing...

I'd say just use basic HTTP authentication and keep the username/password stuff out of the API. Not all wikis have username/password and besides, I thought the point of XML-RPC was to build up on top of what you already have... that being, in part, a web server capable of handling authentication.

-- LesOrchard

MahlenMorris: I'm not yet actually convinced of the point of an RPC setPage(). When would I programmatically want to edit text? If it's going to warp the whole makeup of the Wiki by introducing authentication at this level, I'm not sure it's WikiNature to do it.

Could someone convince or suggest to me what one would do with this feature?

JanneJalkanen: If you want to write a J2ME client for small devices, perhaps? Or a Java WebStart-enabled editor on your desktop? Or an Emacs-based editor?

I think the current HTML TextArea is okay, but it is by no means the ideal editor. :-)

First bit of update from me: I've got an initial stab at the XML-RPC interface for TWiki working.

The other thing, with regard to the point of programmatically editing text... Two issues: Why do it in the first place, and why place it behind access control?

Access control, in my mind, would be optional and up to the Wiki owner. (ie. TWiki wikis can be open, closed, or half-open at the owner's choice, and user registration facilities exist.) Especially if the user/pass is left up to the web server, the API and Wiki doesn't have to worry about it.

As for why do it in the first place... The first obvious thing is a non-browser authoring tool (ie. a better emacs-wiki-mode?) Another thing that might not be so obviously useful at first are wiki topics automatically maintained by agents outside the wiki software. Logs from services/daemons? Mirror topic content between two wikis based on two different engines (say MoinMoin in Python and UseMod in Perl)? I can probably think of some more...

-- LesOrchard

Mirroring would be cool. But then you'll get some interesting problems with the different WikiMarkup people use.

Oh, and some observations while implementing this API tonight, with regards to implementing in other languages and wikis:

  • While I was able to implement the methods whose names were the same yet parameter signatures were different, we may want to change that. I'm not sure all implementations across different languages will be happy about this. ie. getPage(name) and getPageByVersion(name, version)
  • XML-RPC does have a convention for returning exceptions as faults consisting of numerical error code and verbose description. It'd be great to define some error conditions for each of these methods. ie. getPage can fault on page not found.
  • Are versions always integers in all Wiki implementations that have them? In TWiki, they're technically RCS versions (ie. 1.1, 1.2, .., y.x) but mostly they stay in the 1.x branch. So I was able to just chop off the 1. and use the x for the API. But we might want to use a string for versions.
  • Finally, instead of jspwiki.* I used the wiki.* prefix for all my methods. Planning to follow your suggestion, JanneJalkanen, to use a twiki.* prefix for any TWiki-specific methods.
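Les's chop of the RCS branch prefix (exposing "1.12" as plain 12 through the API) might be sketched like this, assuming revisions stay on a single branch as he describes; the function name is my own:

```python
def rcs_to_api_version(rcs):
    # TWiki stores RCS revisions such as "1.12"; the API exposes a plain
    # integer, so keep only the part after the final dot
    return int(rcs.rsplit(".", 1)[-1])
```

The reverse mapping (int back to the engine's internal revision) would then be up to each wiki, which matches Janne's point below about engines choosing their own mapping.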


Some answers:

  • Yes, good point. I'm so used to method overloading that I didn't even think about it.
  • I am not too sure how to do that in Apache XML-RPC, but that is probably a good idea.
  • Technically, so are the JSPWiki versions. I just thought it's stupid to show them to the user, so I'm using just a plain number. I think it's up to the Wiki itself to decide a suitable mapping between the version number and its internal count.
  • Yes, "wiki.*" is correct, methinks. I'll change this in the next release as well.


Method overloading. Man, now why did that term slip my mind? Sheesh. It's not like I've never done anything in Java before. All this Perl is rotting my brane. :)

As for doing faults in Apache XML-RPC, it appears that there's an XmlRpcServer.Worker.writeError method, but not having done anything with this in Java yet, I'm not sure whether you call this directly or if you need to throw an exception and let the package handle it.

And as for the Wiki doing its own handling of the version number to whatever it uses internally... that's probably fair enough, in the interest of establishing something common between wikis.

Next, I see what I can do with UseMod :)


Hi, I'm one of the TWiki-ites... Interesting stuff, had a quick look at Les's code and was impressed by how simple it was to implement this for TWiki.

I'd be interested to hear how people think XML-RPC will be used on Wikis - e.g. is it mainly for getting RecentChanges or for building alternative viewing or editing UIs? The J2ME example is a good one, particularly for devices that can't have full-blown browsers.

One licensing comment on Les's code - it probably needs to be GPLed because it is linking to TWiki functions that are GPLed.


I just updated the license to GPL, since I'm not necessarily attached to the Artistic License :)


MahlenMorris: OK, I now can understand the value of setPage(). Very cool.

Here's a couple examples of what I'll be/am using XML-RPC for. I've written a little page running on my server that can conglomerate pages from this Wiki and put them all together in one page, suitable for printing or snarfing into a Palm or Rocket eBook. It's currently at It's the JSPWiki XML-RPC interface in action!

Also for email notifications of page changes. See NotificationList for a running example of this.

And the nice thing is that neither of these applications required me to convince Janne to add the code to the system, or mangle my installation in some hard-to-upgrade fashion. Plus, if another WikiEngine implements the same API, this client code will work with it too. Dang me, this "loose-coupling" thing is even handier than i thought.

As a side note, I actually viewed and edited pages on this Wiki with a web-connected Palm this last weekend. It worked pretty well (except that diffs don't show up), but writing on a Palm made me much more terse than usual. Trust me, it's hard to see that as a frequently used text input device for a Wiki :) But for accessing pages, yes.

Whoa! I'm impressed. Seriously. Your code makes it really handy to write technical documentation, or role-playing game logs, or whatever, then carry it with you.

And, I think we need something like array listLinks( string pageName ) so that we could do things like "please print me this page and all pages that it links to".


MahlenMorris: Why thank you, Janne. It's really not much code at all. I was hoping that this would work well with AvantGo as well, so that the pages you care about get snarfed into the Palm when you sync, but AvantGo seems to have some tight size restrictions on how large a single page can be; even 67K was too big (I got the Size Limit Error, no matter how much space I allocated to the channel). I'll ponder other ways to solve that...

I need to better parameterize the code I currently have before I'll release it. Maybe by early next week.

listLinks() would be very handy, especially since different Wiki technologies use different syntaxes for links.

I've been thinking about how to handle the "this page and all pages that it links to" issue. I was thinking of a web interface where you pick a starting page, and then it displays all the linked-to pages, user selects some of them, page shows their children, and so on, gradually building up a set of pages until the user is done, and the page results from that. Certainly there are places in Ward's Wiki where i wish i could do that. This may be more complicated to use than it's worth, though.

But I will say it's very interesting to see the names of all the pages in one list; I found myself thinking, "What's that topic? What possible chain of pages could have led to it?"

JanneJalkanen: listLinks() is now a part of API, as of 1.6.12. I'm saving putPage() for 1.7.x branch. :-)

Next update from me: I've got an initial stab at the XML-RPC interface for UseModWiki working.


I'm having trouble deciding between base64 and straight UTF-8 representation of page data. The advantage with the former is that it's standards compliant, but it's more inconvenient (for example, for people who are working with JavaScript). The advantage with the latter is that it's much easier to work with, but it breaks the XML-RPC standard. Also, not all implementations can actually work with the UTF-8 strings - for example, you need to patch the Apache XML-RPC library to work with UTF-8, otherwise it loses all information. Also, the MinML parser it uses must be replaced with a fully standards-compliant parser such as Xerces, which roughly triples the distribution size...

Does anyone have any other opinions?


MahlenMorris: On the listLinks() method:
  • I think differentiating between a generic external link URL and an image link would be good, since I may not be aware exactly what types of links you consider allowable images. Then I could possibly pull the inlined image down for offline inclusion as well. So maybe an additional type for images that this Wiki allows to be inlined.
  • If the intent of this API is in some way to make it useful even beyond JSPWiki (which I think would be a very good intent), then one thing it would be useful to have here is some way of finding this link in the processed HTML page. Currently for my printing page, for example, I have some very JSPWiki-specific code that looks for links and, if a link is for a page that is in the aggregated page, I replace the link with a reference to an anchor within the aggregate page. Thus, if you had both the WikiUsers and JanneJalkanen page in the aggregate, the link to JanneJalkanen in the WikiUsers page now moves you to the JanneJalkanen part of the aggregate, rather than the JanneJalkanen page on the web.

So some way of finding the link in a more generic way would be good. I'm thinking that just providing the HREF string that I should expect to see from a getPageHTML() call would do the trick; then I could search the HTML for the link, and replace it with my link. So for example, the record for this link would be MahlenMorris, 0, Wiki.jsp?page=MahlenMorris.

  • If a page is linked to twice in a page, is it listed twice in the returned array? I'm not sure if I want it to be or not, I just want the spec to define this behavior. Listing it twice is good, it lets you know precisely what links are available. The caller can always fold duplicates together on its own.
  • I'm getting the nagging feeling in the back of my head that I'd also want to know the Wiki Server URL that this goes to (if it's a Wiki page). Yes, I do, because in the offline version the non-local pages need to point back the real Wiki server, and the href above doesn't contain that. But I think that info more properly belongs in a getSystemInfo() call of some sort.
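The fold-duplicates behaviour mentioned above could be sketched as follows, assuming the name/type structs returned by listLinks(); the helper name is hypothetical:

```python
def fold_links(links):
    # listLinks() may list the same target twice if the page links to it
    # twice; fold duplicates while preserving first-seen order
    seen = set()
    unique = []
    for link in links:
        key = (link["name"], link["type"])
        if key not in seen:
            seen.add(key)
            unique.append(link)
    return unique
```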

« This particular version was published on 24-Feb-2002 23:01 by