(Moved this discussion from WikiRPCInterface --JanneJalkanen)

I have a question,

Having taken a quick look at XMLRPC, I can appreciate that it is simpler than SOAP, and that's a good thing. However, given that SOAP appears to have the weight of the W3C behind it, I wonder whether XMLRPC is, in the long run, going to be a dead end?

This is not meant to be a flame. Whilst I am very interested in Web services I don't have an axe to grind about them yet, nor fixed views on how they should be implemented.

I'm just curious about the choice of XMLRPC and whether it's a good strategy?


It's not like we absolutely exclude SOAP - in fact, I'll almost certainly make a SOAP interface whenever I see the need for it, and then we'll offer three RPC interfaces. No biggie. XMLRPC was simpler, easier to set up, far easier to understand, and it's not a MovingTarget like SOAP. (SOAP currently has many problems, such as IBM and Microsoft battling over who owns the WebService model.)


ErnoOxman: It would be nice to have something like setPage( String pagename, base64 text) for the next version of API.

Expect to see J2ME client soon...

JanneJalkanen: Yes, it's probably a good idea to have one as well. I wanted to make sure that getting pages works before allowing anyone to write a four-line script that effectively deletes all pages :-).

A secondary note: Should the putPage API also include a username-password combination?

ErnoOxman: I guess authentication could be done with HTTP Basic Authentication header, which is supported at least in Apache XML-RPC package. If it doesn't feel suitable for some reason, username-password combination would be ok, too.
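For illustration, the HTTP Basic Authentication approach amounts to sending one extra header alongside the XML-RPC POST, so no credentials need to travel inside the method parameters. A minimal sketch of what that header contains (the class and method names here are made up for illustration; assumes Java 8's java.util.Base64):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value of the HTTP "Authorization" header that Basic
    // Authentication sends: "Basic " + base64("username:password").
    static String basicAuth( String username, String password ) {
        String pair = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString( pair.getBytes( StandardCharsets.UTF_8 ) );
    }

    public static void main( String[] args ) {
        // The wiki server just decodes this header and checks the
        // credentials before dispatching the XML-RPC call.
        System.out.println( basicAuth( "user", "pass" ) );
    }
}
```

The nice property is that the API method signatures stay unchanged; the web server (or the XML-RPC library) handles authentication before the call is dispatched.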

Funny, I was just thinking of implementing something like this for UseMod and / or TWiki. Seems like it wouldn't take too much to do, and it'd be nice to jump on a common interface bandwagon. Let's see... how quick can I throw it together...

(Oh yeah, and I wandered in off the street from

Something else I was thinking of for my own Wiki RPC API, which might sound strange, was a wiki filter method. That is, accept text, process text for formatting and WikiWords, return content with links and formatting applied (sans wiki header/footer).

My first purpose in mind was to use this as a way to join a weblog and a wiki, where the weblog entries link to the wiki and can be written in the wiki's style.

-- LesOrchard

I'm scratching my head a bit on that API extension. I too am growing very fond of the idea of using Wiki TextFormattingRules to edit my web pages (I've even noticed myself accidentally using Wiki style in Word documents). But if you want to look at Wiki pages without the header/footer/left menu, why not just edit the .JSP page that doesn't include those items? Aren't those other elements part of the page design? I'm not sure how TWiki is designed, so admittedly this comment may not be all that germane. Am I misunderstanding your desired effect? Using Wiki to create pages that are not editable by the masses?

This problem does point to one limitation of the current structure of the code. TranslatorReader thinks that the only .JSP you'd want to view pages with is "wiki.jsp". But if I wanted to have two different views of the data - one that I use to view and potentially edit pages, and another that just shows the pages without suggesting the ability to edit them - there's no good method for that. I think this implies that there's too close a tie between the Model and the View. Maybe TranslatorReader could take arguments saying which page to use for viewing pages.

By the way, I know that the above situation could be partially implemented by using permissions (give me read/write permissions, but no one else). But the pages would still have the "Edit this page" links on them; they just wouldn't work for anyone else. A cleaner way to do that would be interesting.

But then, this latter seems counter to the Wiki Way. Maybe the real solution is to make a Weblog that had wiki-style editing!

(The above is Mahlen rambling on a topic; the above paragraphs are not believed to form a coherent idea) -- MahlenMorris

I agree that the current TranslatorReader is not as independent as it should be. Partly this is because I wanted to avoid the complications of making a completely generic Wiki translator - I figured nobody else would be interested in using the same kind of translator =).

You could do two things, though:

As for a Wiki&Weblog synthesis, PikiePikie is a sort of combination, I believe. I just couldn't really make heads or tails of it =).

Also, you can get the Wiki HTML by using the getPageHTML() method of the XML-RPC API. That way you get it without the headers/footers.


Oh yeah - and making a common Wiki interface is a cool thing, I agree. Perhaps we should define a standard "wiki." -prefix for all commands, so that you could use "twiki." or "jspwiki." or whatever for app-specific thingies? :-) --JanneJalkanen

Yup, it looks like PikiePikie has something quite similar to what I'm thinking of: A weblog whose entries lead into the wiki itself. In the case of PikiePikie, the weblog is a trick of the wiki itself. What I'm thinking of is where something like BlogApp is used to post a weblog entry to a MovableType weblog, and via some filter (say, a BloggerAPIProxy) which calls on the WikiRPCInterface, that weblog entry is imbued with links to the wiki on the site before it reaches MovableType. So, a site would have a weblog for timely news and updates and a wiki for more long-term idea development. Sure, the wiki's RecentChanges could serve as a source of news and updates, but a weblog is a more explicit tool for that.

(OH, look, I found a discussion of this sort of thing at Wiki:WikiLog. /me wanders over there.)


Excellent, thanks Les! I was pondering about writing an RSS feed for JSPWiki, and now I've got the spec, too. I think it's on target for 1.8.0.


I'm very happy with PikiePikie. It produces RssFeeds for RecentChanges and each weblog. My fondest wish would be to have a WeblogEntry actually be a regular wiki page. Anyway, I track some of these things at my (PikiePikie) wiki-weblog AbbeNormal, and on my Wiki Weblog PIM page.

Do you know about the existing wiki extensions to RSS? See Meatball:RssExtensionModuleForWikis. Also, OpenWiki both emits and embeds RssFeeds.

I'm assuming you've already looked at the links on Meatball:WikiInterchangeFormat.

And thanks for your work on all this! I think there are some great possibilities that we can't even imagine if we get wikis exchanging stuff with each other and other software. The translation aspect between different wiki markup is difficult, but useful results are possible.


Yes, I know of the RSS Wiki standard. Tracking it is covered in RSSFeedForJSPWiki.

I'm sort of dreaming about a RecentChangesPlugin that could download its contents from any RSS feed from any Wiki or Weblog. Something like:

[{INSERT RSSPlugin WHERE source=, since=2d}]

That way I could have a single page with the most interesting changes :-).


Oh, like OpenWiki's:


That renders as headlines only. They also have another:


which renders as headlines & descriptions of 'all' feeds Syndicated on WikiRPCInterface.

On a separate note, you may want to see if you can use this to interface with Bloglet (update-email service):


Based on the work you have done here I've added experimental XML-RPC and SOAP support for the same methods as you use. You can find the methods (with some limited autogenerated documentation, expect better docs tomorrow) here:

One thing that is very different with my methods is that I have decided to break the ASCII rule of XML-RPC and return the data as UTF-8 anyway. If anyone has a huge problem with that they can just use the SOAP method instead ;-)

Feedback is appreciated! Thanks for this very interesting work! I will follow it and probably evolve it a little bit myself :-)


Whee, this is definitely cool :-). I deliberately wanted to stay compatible with XML-RPC spec because, well, it makes sense to be compatible. Not to mention that the Java XML-RPC library didn't take UTF-8 too well anyway. Also, you'll need to convert the page data anyway, since it's possible to use < and > inside the text, which makes it necessary to turn them into HTML entities. So it doesn't really matter much whether you do the whole UTF-8 into base64 or UTF-8 into escaped UTF-8.
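To make that equivalence concrete, here is a small sketch (the class and helper names are illustrative, not JSPWiki code) showing that both transports recover the same page text - base64 over UTF-8 bytes, or a plain string with the XML metacharacters escaped:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PageEncoding {
    // Option 1: UTF-8 bytes wrapped in base64 (what the current API does).
    static String toBase64( String page ) {
        return Base64.getEncoder()
                .encodeToString( page.getBytes( StandardCharsets.UTF_8 ) );
    }

    static String fromBase64( String b64 ) {
        return new String( Base64.getDecoder().decode( b64 ),
                           StandardCharsets.UTF_8 );
    }

    // Option 2: plain string with the XML metacharacters turned into
    // entities - roughly what an XML-RPC library does for <string> values.
    static String escape( String page ) {
        return page.replace( "&", "&amp;" )
                   .replace( "<", "&lt;" )
                   .replace( ">", "&gt;" );
    }

    public static void main( String[] args ) {
        String page = "Use <tt>getPage()</tt> & friends";
        // base64 round-trips the text exactly...
        System.out.println( fromBase64( toBase64( page ) ).equals( page ) );
        // ...and the escaped form carries the same content as entities.
        System.out.println( escape( page ) );
    }
}
```

Either way the receiver gets back the identical page text; the difference is only in how the bytes ride inside the XML envelope.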

(I cleaned some older stuff away, BTW...)


I had three reasons for not using the base64 approach. 1. I think the ASCII rule in XMLRPC is a huge bug, and Dave Winer thinks so as well :-) 2. My main platform is JavaScript... and it cannot handle base64 very well... 3. If anyone really opposes it I can just point them to the SOAP implementation ;-)

Do you have any ideas for other methods that we should implement? :-) I've been thinking about making a setPage() method for writing content...


On a secondary note - can you be sure that the newlines on the Wiki page (which tend to be very meaningful) always go through the XML transformation properly? I am not really certain about that myself, but I've found it best not to make assumptions. :-)

Careful reading of the XML spec says that newlines go through untranslated. So it's okay.

The whole XML-RPC is a bug. Darned infectious at that, I'd say =).

Note that you can, of course, break the XML-RPC standard. You just can't call it XML-RPC anymore, since UserLand software owns the trademark.

I think the proper call for setPage() is something like:

  • setPage( string pageName, base64 text ): Sets the page text. Now, what should it return? The old page text? An error code? An error message?

I think we can do user authentication in

  • a separate call: setPage( string username, string password, string pageName, base64 text ), or
  • using HTTP Basic authentication, or
  • allow both.


If Dave Winer breaks XMLRPC in that way I will as well :-) And if UserLand doesn't want me to, I will just take down the XMLRPC end of that web service.

As for escaping the HTML: the string I return is inside a CDATA section, so it can contain any markup besides the end of the CDATA section (which OpenWiki will fail on anyway :-). So, because of this bug in OpenWiki I won't have this problem. But of course this is not a very good way of doing it... the CDATA end sequence needs to be escaped.
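For reference, the usual trick for a literal "]]>" inside a CDATA section is to close the section and immediately reopen it wherever the terminator occurs, so it never appears verbatim inside one section. A small illustrative sketch (the helper name is made up):

```java
public class CDataEscape {
    // Split every "]]>" in the text across two CDATA sections:
    // "]]>" becomes "]]" + "]]>" + "<![CDATA[" + ">", i.e. the first
    // section ends after "]]" and a fresh one picks up at ">".
    static String wrapInCData( String text ) {
        return "<![CDATA[" + text.replace( "]]>", "]]]]><![CDATA[>" ) + "]]>";
    }

    public static void main( String[] args ) {
        System.out.println( wrapInCData( "a ]]> b" ) );
    }
}
```

A parser reading the result sees two adjacent CDATA sections whose contents concatenate back to the original text.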

The setPage() seems good. I will try to implement it later today. I would say we go for: setPage(pageName, text, username, password)


Good point on Dave. So, I was going to release 1.7.0 over the weekend, which probably should have the API fixed. Shall we go with the "UTF-8 in strings" or "UTF-8 in base64" -approach? --JanneJalkanen

Just skimming through updates since the last time I visited this page, but... I'm thinking this week of working up an implementation of this API on top of UseModWiki v0.92 and TWiki. Don't have much time to write at the moment, but wanted to drop in my US$0.02 about the authentication thing...

I'd say just use basic HTTP authentication and keep the username/password stuff out of the API. Not all wikis have usernames/passwords, and besides, I thought the point of XML-RPC was to build on top of what you already have... that being, in part, a web server capable of handling authentication.

-- LesOrchard

MahlenMorris: I'm not yet actually convinced of the point of an RPC setPage(). When would I programmatically want to edit text? If it's going to warp the whole makeup of the Wiki by introducing authentication at this level, I'm not sure it's WikiNature to do it.

Could someone convince or suggest to me what one would do with this feature?

JanneJalkanen: If you want to write a J2ME client for small devices, perhaps? Or a Java WebStart-enabled editor on your desktop? Or an Emacs-based editor?

I think the current HTML TextArea is okay, but under no circumstances is it the ideal editor. :-)

First bit of update from me: I've got an initial stab at the XML-RPC interface for TWiki working.

The other thing, with regard to the point of programmatically editing text... Two issues: Why do it in the first place, and why place it behind access control?

Access control, in my mind, would be optional and up to the Wiki owner. (ie. TWiki wikis can be open, closed, or half-open at the owner's choice, and user registration facilities exist.) Especially if the user/pass is left up to the web server, the API and the Wiki don't have to worry about it.

As for why do it in the first place... The first obvious thing is a non-browser authoring tool (ie. a better emacs-wiki-mode?) Another thing that might not be so obviously useful at first are wiki topics automatically maintained by agents outside the wiki software. Logs from services/daemons? Mirror topic content between two wikis based on two different engines (say MoinMoin in Python and UseMod in Perl)? I can probably think of some more...

-- LesOrchard

Mirroring would be cool. But then you'll get some interesting problems with the different WikiMarkup people use.

Oh, and some observations while implementing this API tonight, with regards to implementing in other languages and wikis:

  • While I was able to implement the methods whose names were the same yet parameter signatures were different, we may want to change that. I'm not sure all implementations across different languages will be happy about this. ie. getPage(name) and getPageByVersion(name, version)
  • XML-RPC does have a convention for returning exceptions as faults, consisting of a numerical error code and a verbose description. It'd be great to define some error conditions for each of these methods. ie. getPage can fault on page not found.
  • Are versions always integers in all Wiki implementations that have them? In TWiki, they're technically RCS versions (ie. 1.1, 1.2, .., y.x) but mostly they stay in the 1.x branch. So I was able to just chop off the 1. and use the x for the API. But we might want to use a string for versions.
  • Finally, instead of jspwiki.* I used the wiki.* prefix for all my methods. Planning to follow your suggestion, JanneJalkanen, to use a twiki.* prefix for any TWiki-specific methods.
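The version-mapping idea from the third point can be sketched as follows (the helper is hypothetical, not TWiki code): chop the RCS branch prefix and expose the remainder as the plain integer the API uses.

```java
public class RcsVersion {
    // Map a TWiki-style RCS revision such as "1.7" to the plain integer
    // the API exposes, by keeping only the part after the last dot.
    // Assumes the revision stays on one branch (e.g. the 1.x branch).
    static int toApiVersion( String rcsRevision ) {
        int dot = rcsRevision.lastIndexOf( '.' );
        return Integer.parseInt( rcsRevision.substring( dot + 1 ) );
    }

    public static void main( String[] args ) {
        System.out.println( toApiVersion( "1.7" ) );
    }
}
```

Going the other way (API integer back to "1." + n) is equally mechanical, which is what makes a per-wiki mapping workable.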


Some answers:

  • Yes, good point. I'm too used to method overloading that I didn't even think about it.
  • I am not too sure how to do that in Apache XML-RPC, but that is probably a good idea.
  • Technically, so are the JSPWiki versions. I just thought it's stupid to show them to the user, so I'm using just a plain number. I think it's up to the Wiki itself to decide a suitable mapping between the version number and its internal count.
  • Yes, "wiki.*" is the correct, methinks. I'll change this in the next release as well.


Method overloading. Man, now why did that term slip my mind? Sheesh. It's not like I've never done anything in Java before. All this Perl is rotting my brane. :)

As for doing faults in Apache XML-RPC, it appears that there's an XmlRpcServer.Worker.writeError method, but not having done anything with this in Java yet, I'm not sure whether you call this directly or if you need to throw an exception and let the package handle it.

And as for the Wiki doing its own handling of the version number to whatever it uses internally... that's probably fair enough, in the interest of establishing something common between wikis.

Next, I see what I can do with UseMod :)


Hi, I'm one of the TWiki-ites... Interesting stuff, had a quick look at Les's code and was impressed by how simple it was to implement this for TWiki.

I'd be interested to hear how people think XML-RPC will be used on Wikis - e.g. is it mainly for getting RecentChanges or for building alternative viewing or editing UIs? The J2ME example is a good one, particularly for devices that can't have full-blown browsers.

One licensing comment on Les's code - it probably needs to be GPLed because it is linking to TWiki functions that are GPLed.


I just updated the license to GPL, since I'm not necessarily attached to the Artistic License :)


MahlenMorris: OK, I now can understand the value of setPage(). Very cool.

Here's a couple examples of what I'll be/am using XML-RPC for. I've written a little page running on my server that can conglomerate pages from this Wiki and put them all together in one page, suitable for printing or snarfing into a Palm or Rocket eBook. It's currently at It's the JSPWiki XML-RPC interface in action!

Also for email notifications of page changes. See NotificationList for a running example of this.

And the nice thing is that neither of these applications required me to convince Janne to add the code to the system, or mangle my installation in some hard-to-upgrade fashion. Plus, if another WikiEngine implements the same API, this client code will work with it too. Dang me, this "loose-coupling" thing is even handier than i thought.

As a side note, I actually viewed and edited pages on this Wiki with a web-connected Palm this last weekend. It worked pretty well (except that diffs don't show up), but writing on a Palm made me much more terse than usual. Trust me, it's hard to see that as a frequently used text input device for a Wiki :) But for accessing pages, yes.

Whoa! I'm impressed. Seriously. Your code makes it really handy to write technical documentation, or role-playing game logs, or whatever, then carry it with you.

(See WikiRPCInterfaceListLinks)


MahlenMorris: Why thank you, Janne. It's really not much code at all. I was hoping that this would work well with AvantGo as well, so that the pages you care about get snarfed into the Palm when you sync, but AvantGo seems to have some tight size restrictions on how large a single page can be; even 67K was too big (I got the Size Limit Error, no matter how much space I allocated to the channel). I'll ponder other ways to solve that...

I need to better parameterize the code I currently have before I'll release it. Maybe by early next week.


I will say it's very interesting to see the names of all the pages in one list; I found myself thinking, "What's that topic? What possible chain of pages could have led to it?"

Next update from me: I've got an initial stab at the XML-RPC interface for UseModWiki working.


Starting to poke at MoinMoin for an implementation of the API, which doesn't look to be that hard really. Will then update my implementations up to the recent changes in the API here. Does anyone think I'm nutty yet? :)

One thing I'm thinking about as another method for the API would facilitate bridging between wikis: getPageWIF() and setPageWIF(), where WIF stands for WikiInterchangeFormat. As someone mentioned earlier, mirroring/bridging is hard because of the varying WikiMarkup styles. This would solve it by putting the burden of translating local WikiMarkup into a common interchange format on each wiki. A bridge between two wikis would simply grab the WIF from one wiki and hand it to the other.

The problem is, though... There is no WikiInterchangeFormat. But, we shouldn't let that stop us. :)


Wow, this page is starting to get huge :) At some point, it should probably be refactored, maybe into a summary DocumentMode page and a ThreadMode chitchat page.

One more suggestion for a method:

  • array searchPages(base64 search_terms): Returns a list of all pages within which a match to search_terms is found. The result is an array of strings, again UTF-8 in URL encoding.

I think with that and setPage(), almost all the most common wiki functions are covered.
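A naive sketch of what searchPages() might do on the server side (illustrative only - the class is made up, and a real engine would consult an index rather than scan every page body):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SearchPages {
    // Scan every page body for the search term and return the names of
    // the matching pages, sorted for a stable result order.
    static List<String> searchPages( Map<String, String> pages, String term ) {
        List<String> hits = new ArrayList<>();
        for ( Map.Entry<String, String> e : pages.entrySet() ) {
            if ( e.getValue().contains( term ) ) {
                hits.add( e.getKey() );
            }
        }
        Collections.sort( hits );
        return hits;
    }

    public static void main( String[] args ) {
        Map<String, String> wiki = new HashMap<>();
        wiki.put( "Main", "Welcome to the wiki" );
        wiki.put( "WikiRPCInterface", "XML-RPC methods for the wiki" );
        wiki.put( "SandBox", "Scribble here" );
        System.out.println( searchPages( wiki, "wiki" ) );
    }
}
```

The XML-RPC layer would then wrap each returned name as a UTF-8 string in the result array, the same way listPages does.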


JanneJalkanen: I just realized that we're now a fully hypeword compliant Web service. Scary... :-)

MahlenMorris: I've been thinking the Web Service phrase myself. It's making me feel all tingly ;)

MahlenMorris: I've noticed a bug in the recentChanges() RPC code; as your comment there wonders, it's not doing the time zone conversion correctly. I noticed this because I was seeing a file on the change list that shouldn't have been there. The only code I've been able to come up with that correctly determines the UTC-local time zone offset is:

        Calendar cal = Calendar.getInstance();
        TimeZone local = TimeZone.getDefault();
        long offset = local.getOffset( cal.get( Calendar.ERA ),
                                       cal.get( Calendar.YEAR ),
                                       cal.get( Calendar.MONTH ),
                                       cal.get( Calendar.DAY_OF_MONTH ),
                                       cal.get( Calendar.DAY_OF_WEEK ),
                                       0 );
        System.out.println( offset );

offset is thus the number of millis to add to UTC to get local time. This is working correctly for me; in my case (Pacific Standard Time) it's -28800000. It seems awkward, I know, but I think the complexity is due to the fact that the offset changes over time, and some places, like Hawaii, have actually changed their offsets at points in the past. Java 1.4 has a slightly better version of getOffset(), but I didn't want to lock my code to Java 1.4.

JanneJalkanen: I think you're correct. Note, however, that your code fails twice a year, since in most countries the DST change occurs at 4 am - NOT at midnight. :-)

MahlenMorris: Good point. If someone can figure out how to get the seconds value that TimeZone.getOffset() needs, I'd love to hear it, as I'm stumped. Of course, this code would also be wrong if the RPCClient object was held onto past the DST changeover.

JanneJalkanen: I think the following code works. d is the time in local time, and after this, cal.getTime() will return you UTC.

        Calendar cal = Calendar.getInstance();
        cal.setTime( d );
        cal.add( Calendar.MILLISECOND, 
                 - (cal.get( Calendar.ZONE_OFFSET ) + 
                    (cal.getTimeZone().inDaylightTime( d ) ?
                     cal.get( Calendar.DST_OFFSET ) : 0 )) );
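Wrapped up as a self-contained sketch (the class and method names are mine), with a check that the shift matches the platform's own idea of the offset at that instant:

```java
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class ToUtc {
    // Shift a Date by the zone offset (plus the DST offset when the
    // date falls inside daylight-saving time) to obtain UTC, exactly
    // as in the snippet above.
    static Date toUtc( Date d ) {
        Calendar cal = Calendar.getInstance();
        cal.setTime( d );
        cal.add( Calendar.MILLISECOND,
                 - (cal.get( Calendar.ZONE_OFFSET ) +
                    (cal.getTimeZone().inDaylightTime( d ) ?
                     cal.get( Calendar.DST_OFFSET ) : 0 )) );
        return cal.getTime();
    }

    public static void main( String[] args ) {
        Date now = new Date();
        long shift = now.getTime() - toUtc( now ).getTime();
        // The applied shift should equal the default TimeZone's total
        // offset (raw + DST) for this instant.
        System.out.println( shift == TimeZone.getDefault()
                                             .getOffset( now.getTime() ) );
    }
}
```

Note this still inherits the caveat discussed above: it reads the DST state at the moment of conversion, so a long-lived object computing the shift once would drift across a DST changeover.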

MahlenMorris: This morning on the bus to work I thought it would be cool to somehow mock up the WikiRPCInterface onto the Great Mother Wiki, something that would act as an XML-RPC interface to the WikiWikiWeb, thus allowing (for example) Hula to interact with it. However, looking at it now, I think that the venerable WikiWikiWeb is a bit lacking in information that the WikiRPCInterface likes to have. You only know the day that something was last edited, you can't really look back many versions; there's just a lot missing. Pity, cause there's a lot of people on that site interacting on interesting topics, it'd be nice to be able to parcel up the changes differently. That was sort of the inspiration for the PrintWiki utility, the desire to be able to print out a bunch of interlinked pages to read at my leisure.

Upon further pondering, I guess you could achieve this by querying the Recent Changes page every hour and saving the metadata locally. It's a brutal way to do it, and you'd basically be recreating the site on the intervening machine (plus the extra metadata), but it might be a neat way to introduce the WikiWikiWeb people to the WikiRPCInterface.

JanneJalkanen: The original wiki uses the concept of forgive and forget; that is, old mistakes should not haunt around for too long. This is something you can't have if you store every change ever. But I don't think that really prevents the use of the WikiRPCInterface... I think you could get a long way simply by parsing the HTML that is returned from the Wiki. You can easily get the plain text by reading from the editor and taking everything between <textarea> and </textarea>. HTML is not a problem, though you would get the header/footer as well.

And I guess that's pretty much it... RecentChanges you'd have to parse yourself, since the original Wiki does not support RSS. But the format is simple. Yeah, I guess it would be doable.

MahlenMorris: I'm now actively working on it. Currently under development at Hoop.

I am implementing MoinMoin:AdoptedPages. This feature uses a kind of RecentChanges server. Until now I planned to use the WikiRpc interface (getRecentChanges). But for really transferring RecentChanges from one wiki to another there is a lot of stuff missing in the interface. The values transmitted with the wiki RSS feeds look much more useful. I think I'll extend the data returned by getRecentChanges with some of the RSS values (link to page, diff link, ...). Any opinions about this? -- MoinMoin:FlorianFesti

It would be useful to have a pageHistory call too.

--AnonymousCoward, 06-Jul-2007

I have some trouble with jspwiki rpc2. I set the url to "/proxy/", and the post data is:

        <?xml version="1.0"?>
        <methodCall>
          <methodName>wiki.getBackLinks</methodName>
          <params>
            <param><value><string>main</string></value></param>
          </params>
        </methodCall>

The string is encoded with UTF-8, but the response is:

        <?xml version="1.0"?>
        <methodResponse>
          <fault>
            <value><struct>
              <member><name>faultString</name>
                <value>java.lang.NoSuchMethodException: com.ecyrd.jspwiki.xmlrpc.RPCHandlerUTF8.getBackLinks(java.lang.String)</value>
              </member>
              <member><name>faultCode</name>
                <value><int>0</int></value>
              </member>
            </struct></value>
          </fault>
        </methodResponse>

I tested all of version 2's new functions and they respond with the same error message, but when I test version 1, it runs well. Can anybody tell me why? -- Tony.Liu
We don't support version 2 here. It is just a draft plan.

-- JanneJalkanen

« This page (revision-30) was last changed on 14-Mar-2008 00:06 by JanneJalkanen