About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.

Extracting data from Domino into PDF using XSLT and XSL:FO (Part 5)

QuickImage Category  
This entry is part of the series Domino, XSL:FO and XSLT that dives into the use of XSLT and XSL:FO in IBM Lotus Domino.

XSL:FO and XSLT are text-based formats, so you could use your favorite text editor (or a modern heir) to write your XML. You can also poke yourself in the eye.
An XSL* transformation is code built on pattern matching, priorities (and closing tags), so the probability that you get it right, especially when you are new to the domain, approaches zero. A good content-aware editor and a debugger are paramount. Looking around, most of these editors are geared towards developers. Here are some I tested or worked with: There are more, so you want to do some research yourself. Of course you can debug the hard way or just use the Eclipse XSLT Debugger.
Being able to write good XSLT doesn't automatically make you write good XSL:FO. So you want to make sure the tools support that as well. The above tools do FO too, and there are some specialized editors around to choose from. However, they have limited appeal to a non-developer user.
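To make the pattern-matching and priority behaviour concrete, here is a minimal sketch (Python with the third-party lxml library; the sample XML and templates are invented for illustration, not from any of the editors above):

```python
# Minimal sketch: why XSLT pattern matching and template priorities
# are easy to get wrong without a content-aware editor or debugger.
from lxml import etree

XML = """<order><item type="book">XSLT Guide</item><item>Pencil</item></order>"""

# Two templates match <item>; the more specific pattern item[@type='book']
# has a higher default priority (0.5 vs 0) and wins for typed items.
XSLT = """<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="item">plain:<xsl:value-of select="."/>;</xsl:template>
  <xsl:template match="item[@type='book']">book:<xsl:value-of select="."/>;</xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSLT.encode()))
result = str(transform(etree.fromstring(XML)))
print(result)  # book:XSLT Guide;plain:Pencil;
```

Remove the `@type` predicate from the second template and the processor has two equal-priority matches for the same node, which is exactly the class of bug a debugger catches for you.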
A suitable tool for business users (still, they would need some IT aptitude) is the Java4Less FO Designer. It is not modeled on an XML tree but on the layout metaphor of other report generators, so business users might feel more familiar:

The price is modest and you only need licences for people who want to design FO reports, not to run them (that's pure Java, as you learned already).
As usual YMMV.


10 Commandments for public facing web applications

A customer recently asked how a public facing web application on Domino would be different from an Intranet application. In general there shouldn't be a difference. However in a cost/benefit analysis Intranets are usually considered "friendly territory", so less effort is spent on hardening against attacks and poking around (much to my delight, when I actually poke around). With this in mind here you go (in no specific order):
  1. Protect your server: Typically you would have a firewall and reverse proxy that provides access to your application.
    It should be configured to check URLs carefully to ensure no unexpected calls are made by somebody probing database URLs. It is quite some work to get that right (for any platform), but you surely don't want to become "data leak" front-page news.
    There's not much to do on the Domino side, it is mostly the firewall guys' work. Typical attack attempts include stuff like ?ReadViewEntries, $Defaultxx or $First. Of course when you use Ajax calls into views you need to cater for that.
    I would block *all* ?ReadViewEntries and have URL masks for the Ajax calls you plan to use. Be careful with categorized views. Avoid them if possible and always select "hide empty categories". Have an empty $$ViewTemplateDefault that redirects to the application.
  2. Mask your URLs: Users shouldn't go to "/newApp/Loand2013/loanapproduction.nsf?Open" but to "/loans". Use Internet site documents to configure that (eventually the firewall/reverse proxy can do that too). In Notes 9.0 IBM provides mod_domino, so you can use the IBM HTTP Server (a.k.a Apache HTTP) to front Domino. On the XPagesWiki there is more information on securing URLs with redirects. Go and read them
  3. Harden your agents: Do not allow any ?OpenAgent URL (challenge: an agent also opens on ?Open, so if all agents have a certain naming you can use URL pattern to block them). In an agent make sure your code handles errors properly. Check where the call to an agent came from. If it was called directly discard it.
  4. Treat data with suspicion: Do not rely on client-side validation. Providing it is nice for the user as a comfortable input aid. However, you don't control the devices and browsers anymore, and an attacker can use Firebug or cURL to bypass any of your validations. You have to validate everything on the server (again). You also have to check content for unexpected input like passthru HTML or JavaScript. XPages does that for you
  5. Know your user: Split your application into more than one database. One for the publicly accessible content (anonymous access) and one that requires authentication. Do not try to dodge authenticating users and re-invent security mechanisms. You *will* overlook something and then your organisation makes headline news in the "latest data breach" section. There are ample examples how to generate LTPA tokens outside of Domino, so you don't need to manage usernames/passwords etc. if you don't want to. Connect them to your existing customer authentication scheme (e.g. eBanking if you are a bank) for starters. Do not rely on some cookie you try to interpret and then show or don't show content. The security tools at your hand are the ACL and reader fields
  6. Test, Test, Test: You can usability test, load test, functional test, penetration test, validity test, speed test and unit test. If you don't test, the general public and interested 3rd parties will do that for you. The former leads to bad press, the latter to data breaches
  7. Use a responsive layout: Use the IBM OneUI (v3.0 as of this blog date) or Bootstrap (get a nice theme). XPages provides great mobile controls. Using an XPage single page application you can limit the range of allowed URLs to further protect your assets
  8. Code for the most modern browser: use HTML5 and degrade gracefully. So it is not "must look the same in all browsers", but "users must be able to complete tasks in all browsers" - experiences might differ. Take advantage of the local cache (use an ETag and all the other tips!)
  9. Use https the very moment a user is known. If in doubt try Firesheep
  10. Of course the Spolsky Test applies here too!
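The URL screening from commandment 1 can be sketched in a few lines. This is a hedged illustration of the rule logic only (a real deployment would live in the reverse proxy configuration, and the paths and patterns here are made-up examples, not a complete rule set):

```python
import re

# Deny-list modeled on the probing URLs named above (?ReadViewEntries,
# ?OpenAgent, $Defaultxx, $First). A production setup would rather use
# an allow-list of every URL the application legitimately serves.
BLOCKED = re.compile(r"\?(ReadViewEntries|OpenAgent)|\$Default\w*|\$First",
                     re.IGNORECASE)

# Hypothetical mask for the one Ajax view read the application needs.
ALLOWED_AJAX = re.compile(
    r"^/loans/api/[a-z]+\?ReadViewEntries&OutputFormat=JSON$", re.IGNORECASE)

def is_request_allowed(url: str) -> bool:
    """Allow explicitly whitelisted Ajax view reads, deny probing URLs."""
    if ALLOWED_AJAX.match(url):
        return True
    return not BLOCKED.search(url)

print(is_request_allowed("/loans"))                      # True
print(is_request_allowed("/names.nsf?ReadViewEntries"))  # False
```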
As usual YMMV


Explaining web enablement challenges to business users

With XPages, Notes and Domino applications can be the new sexy and run beautifully on all sorts of devices, big and small. So a whole cottage industry (no insult intended) of offerings around Domino Application Modernization appeared.
Modernization always also means: browser and mobile enablement.
Expectations ran high that a magic button would transform (pun intended) a code base organically grown over two decades into beautiful, working, responsive web 2.0 applications. But GIGO stands firm and not all applications are created equal. Domino's (and LotusScript's) greatest strength turned into a curse: being an incredibly forgiving environment. Any cobbled-together code would somehow still run, and lots of applications truly deserve the label "contains Frankencode".
There is a lot of technical debt that needs to be paid.
The biggest obstacle I've come across is the wild mix of front-end (a.k.a. Notes client) and back-end (core database operations) code in forms, views and libraries. This problem never arises in the popular web environments, since different languages are at work at the front and back (e.g. JavaScript/PHP, JavaScript/Ruby, JavaScript/Java) - only in very modern environments it is all JavaScript (the single-language idea Notes sported 20 years ago).
The first thing I taught every developer in LotusScript is to keep front- and back-end separate and keep the business logic in script libraries that only contain back-end classes. Developers who followed these guidelines have a comparably easy time web-enabling applications.
But how to explain this problem to a business user (who probably saw some advertisement about automatic conversion to web, be it on IBM technology or a competitor)?
Tell them a story (if they are not interested in listening to any of that, there's a solution too)!
Here we go:
You are a supply specialist for a natural resources exploration company and your current assignment is to get your geo engineers set up in a remote jungle location. So you have to source vehicles, build roads and establish a supply chain. Probably you get a bunch of those (a living legend since 1948), stock spare parts and ensure that you have diesel stations along the way.
Everything is fine - the road might be a little patchy here and there, but that's not a problem, you get your guys delivered and working. You even look good (sometimes).
These are your Notes client applications: delivering business value, robust, efficient, and able to deal with a lot of road deficiency (that would be code quality).
Your remote location becomes successful and suddenly the requirements change. People want to get there in style (Browsers). Your gas stations will do, no problem here, but already the roads need to be a little less patchy and your stock of spare parts and the mechanics trained on them are useless. That would be your front-end classes and the "mix-them-all-up" coding style that worked in the past.
If the "arrive-in-style" meme escalates further (mobile devices) you need to build flawless roads (unless your oil has been found in Dallas where proper roads supposedly exist).
An experienced supply planner might anticipate what is coming and, while sending in the Unimogs, already prepare the gravel foundation, so paving the road for the fragile cars is just a small step. Or nothing has been done for a while and the road health check comes back with a huge bill of materials.
You get the gist, now go and tell your own story.


What to do with Save & Replication conflicts

When customers start developing new Domino applications, the distributed nature of Domino can pose a stumbling block. Suddenly the unheard of replication conflict crops up and wants to be dealt with. A customer recently asked:
"I need to check with you about the Conflict Handling in Lotus Notes application. Default I will set the Conflict Handling to Create Conflicts, but I found my application have create more and more replication or save conflict documents. What can I do for all these replication or save conflict documents, and I found some information in conflict documents is not in original document? How can I prevent the system to generate conflict document?"
Replication Conflict Handling
Before going into details, let's have a closer look at how Notes handles its data. There's quite some hierarchy involved:
  1. To replicate, two databases need to have the same replica id. The replica id is created when a database is created and can only be changed using the C API (or a wrapper around it). When an NSF is copied on the file system, you actually create a replica (but you wouldn't do that, would you?)
  2. Inside a database, two documents need to have the same document unique id (UNID), which is created from a time stamp at document creation time. The UNID is actually read/write in LotusScript and Java, and a certain gentleman can teach you about creative abuse of this capability. In addition, a sequence number is stored in the document properties that gets incremented when a document is changed. Together with the last modification date this forms the patented Notes replication.
  3. Inside the document the Notes items are stored. These are not just field values in a schema (like in an RDBMS) but little treasure troves of information. An item has a name, an array of values, a data type, an actual length and a sequence number. Notes can (and does) use this sequence number to see which items have been altered (note the difference: a form contains fields, a document contains items)
So how do the form options behave for conflicts (which are stored as a $ConflictAction item in the document)? First Notes determines a "winner" and a "loser" document. The winner is the most edited document. Only if both have the same number of edits does the document saved last win (savour this: an older document can still be a winner). Once the winner is determined, the conflict resolution is executed:
  • Create conflicts (no $ConflictAction item)
    The "loser" document is converted into a response document of the winner document and an item $Conflict is created. The conflicts are shown in views unless excluded by the view selection formula (& !@IsAvailable($Conflict)). Conflict resolution is manual (an agent you write is considered manual too)
  • Merge conflicts ($ConflictAction = "1")
    If a document has been edited concurrently but different field have been altered, then they are merged into the one document and no conflict is created. If the same fields are altered a conflict is still generated.
    Sounds good? In practice I often see this fail when true distributed edits by users are the conflict cause, since applications habitually contain a field "LastEditedBy" with @UserName as formula - a lesson to be learned when designing distributed apps: update only what is really necessary
  • Merge/No Conflicts ($ConflictAction = "3")
    Same as above: if different fields have been altered, then they are merged. If the same fields were altered the loser document is silently discarded. One could argue: why not merge at least the different fields. But that would create rather a data mess
  • No Conflicts ($ConflictAction = "2")
    The radical solution: the winner takes it all, the loser disappears and nobody will ever know. I haven't seen a good use case for that, but the world is big
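The winner/loser and merge rules above can be condensed into a toy model. This is plain Python for illustration only, not the actual Notes implementation; the `Doc` class, item names and timestamps are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    seq: int                  # edit sequence number (how often it was saved)
    modified: float           # last-save timestamp
    items: dict = field(default_factory=dict)

def pick_winner(a: Doc, b: Doc):
    """Most-edited document wins; only on a tie does the later save win."""
    if a.seq != b.seq:
        return (a, b) if a.seq > b.seq else (b, a)
    return (a, b) if a.modified >= b.modified else (b, a)

def resolve(a, b, changed_a, changed_b, action=None):
    """Toy $ConflictAction handling: None=create, '1'=merge, '2'=no, '3'=merge/no."""
    winner, loser = pick_winner(a, b)
    if action in ("1", "3") and changed_a.isdisjoint(changed_b):
        merged = dict(winner.items)   # disjoint edits: merge into one document
        loser_changed = changed_a if loser is a else changed_b
        for name in loser_changed:
            merged[name] = loser.items[name]
        winner.items = merged
        return winner, None           # merged, no conflict document
    if action in (None, "1"):
        return winner, loser          # loser becomes a $Conflict response
    return winner, None               # '2'/'3': loser silently discarded

# An older document with more edits still wins, and disjoint edits merge:
old = Doc(seq=3, modified=100.0, items={"Status": "open", "LastEditedBy": "Ann"})
new = Doc(seq=2, modified=200.0, items={"Status": "open", "LastEditedBy": "Ben"})
winner, conflict = resolve(old, new, {"Status"}, {"LastEditedBy"}, action="1")
print(winner is old, conflict)  # True None
```

Note how the "LastEditedBy" example from the merge-conflict bullet plays out: because only disjoint items changed, the documents merge cleanly; had both edits touched the same item, a conflict document would appear.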
So what to do about them? First you need to have clarity: They are called "Replication and Save" conflicts. So they can also happen on the same server. Some pointers how to prevent them:
  • Using document locking prevents them when edits happen on the same server
  • Also make sure that your scheduled agents don't run on two servers concurrently
  • A nice rookie mistake is to call save in a querySave (or postSave without closing the form) event - Domino will (or has) saved the document, get out of its way
  • Recheck your application flow: Can you limit the current allowed editor using Author fields/items? Quite often removing access in querySave, combined with an agent altering access "on documents changed", makes a lot of sense
  • Check your forms: are there fields that are "computed" and changed by agents? You could/should set them to "computed when composed"
  • Avoid NotesDocument.computeWithForm(..) in agents unless you are sure it isn't the source of your conflict
  • If your business logic mandates that you have multiple concurrent edits, consider implementing the inversion of logging pattern (with XPages you can make that real snappy)
  • Last but not least: replicate more often or consider clustering your servers
As usual YMMV


Avoiding login prompts in mobile approvals

A customer posted an interesting question: "We send eMail notifications in our workflow applications. Our users don't want to be password prompted when following that link from their mobile devices. What are my options?".
While the Notes client can handle automatic authentication (especially with embedded experiences), in iNotes LTPA has already logged you in, and on PC platforms Single Sign-On is well established; mobile devices are trickier.
The "big" solutions would entail some form of Mobile Device Management (MDM), but that's nothing you want to deploy just for one app in question. You do want to plan MDM, but that's a story for another time (IBM recommends to do that in the context of an overall Endpoint management plan).
I see different possible approaches to get around the password prompt:
  • Use a VPN:
    A good VPN server can communicate with the reverse proxy and provide an LTPA token automatically. Sample code is available for F5 BIG-IP and IBM Mobile Connect. Implementing it for other VPN/Reverse Proxy combinations should be possible - check out Puakma SSO and talk to webWise
  • Use X509:
    After you deploy X509 certificates onto Android or iOS you can set the Domino Internet site document for your application to require X509 authentication. Since the certs are deployed on the device no additional prompt is required (of course that depends on how you secured the certs)
  • Go native:
    In a native (or almost native) application you can locally store the access credentials. You read/write data via JSON and https calls. Not too far off: use OAuth to authorise your mobile app.
  • Update (thx Mark, Per): Use OpenNTF's "Auto Login" project
I like the approach using a VPN with LTPA generation best, since it saves you the trouble of managing the X509 certificates and adds a security layer on top.
As usual YMMV


Now that you can have embedded experiences in Notes, you need to send them

Courtesy of Apache Shindig, IBM Notes 9.0 (the client) and IBM Domino 9.0 (the server) can now render OpenSocial Embedded Experiences.
While Notes always had the option for custom mail experiences (store form in document, send), the Embedded Experiences allow integration into any application (that supports them).
One of the first things you might want to do is to pimp your existing applications to provide this new experience. It would be much lighter than storing the form and also work with other mail front-ends.
The good news: adding embedded experiences degrades gracefully, so all users not on 9.0 or later won't see a thing. This means you can get started immediately:
  1. listen to Niklas' explanation and get the sample code
  2. create/configure the gadget for your application (that's a separate post, coming soon), check the documentation
  3. create a class that does all the notification sending, classic and EE
To give you a head start for #3 use this.
As usual YMMV


Managing @Today in view selection formulas

Using @Yesterday, @Today, @Now or @Tomorrow in Notes view selection formulas is a bad idea (but you know that). But what if your application depends on such a selection? The solution is to update your database design automatically with a static date. There are a few caveats:
  • You must be careful about date formats, since you don't want code to depend on a locale setting. So @Date(2012;12;31) is your safe option
  • After updating a view you want to replicate it across all servers to be sure you don't get design conflicts
  • When users use a local replica of your database you want to check the validity of your selection formula in the queryViewOpen event and eventually adjust it there. This would require your control database to be available locally (code not shown here)
  • Extra care is needed if you have views with duplicate names in your database
I designed a solution that uses a control database with one form, one view and a bit of LotusScript. First create a form with the following fields: Server, Database, ViewName, SelectionFormula (all text, editable), lastRun (computed when composed, with formula lastRun) and finally CurrentSelectionFormula (text, computed). Use this formula:
ReplaceStringToday := "@Date("+@Text(@Year(@Today))+";"+@Text(@Month(@Today))+";"+@Text(@Day(@Today))+")";
ReplaceStringYesterday := "@Date("+@Text(@Year(@Yesterday))+";"+@Text(@Month(@Yesterday))+";"+@Text(@Day(@Yesterday))+")";
ReplaceStringTomorrow := "@Date("+@Text(@Year(@Tomorrow))+";"+@Text(@Month(@Tomorrow))+";"+@Text(@Day(@Tomorrow))+")";
@ReplaceSubstring(SelectionFormula; "@Today":"@Now":"@Yesterday":"@Tomorrow"; ReplaceStringToday:ReplaceStringToday:ReplaceStringYesterday:ReplaceStringTomorrow)
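For illustration, here is the same replacement logic expressed in Python (a sketch mirroring the formula above; the function names are made up):

```python
from datetime import date, timedelta

def static_date(d: date) -> str:
    """Locale-safe @Date literal, e.g. @Date(2012;12;31)."""
    return f"@Date({d.year};{d.month};{d.day})"

def rewrite_selection(formula: str, today: date) -> str:
    """Mirror the @ReplaceSubstring call: swap dynamic @functions
    for static @Date literals (@Now is treated like @Today)."""
    replacements = {
        "@Yesterday": static_date(today - timedelta(days=1)),
        "@Tomorrow": static_date(today + timedelta(days=1)),
        "@Today": static_date(today),
        "@Now": static_date(today),
    }
    for token, value in replacements.items():
        formula = formula.replace(token, value)
    return formula

print(rewrite_selection("SELECT Due < @Today", date(2012, 12, 31)))
# SELECT Due < @Date(2012;12;31)
```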

Every time that document gets refreshed, the field will reflect what the selection formula should currently look like. Then create a view with Server, Database, ViewName -> all 3 columns sorted (ViewName sorting is optional). I called mine ViewsToAdjust. The next step is to populate the documents with views that actually have time related selection formulas. I use this agent for it:
Option Public
Option Declare

Sub Initialize
    Dim s As New NotesSession
    Dim server As String
    Dim dbDir As NotesDbDirectory
    Dim reportDB As NotesDatabase
    Dim db As NotesDatabase
    Set reportDB = s.Currentdatabase
    server = reportDB.Server
    server = InputBox$("Select Server to scan", "Server selection", server)
    If Trim(server) = "" Then
        Exit Sub
    End If
    Set dbDir = s.Getdbdirectory(server)
    Set db = dbDir.Getfirstdatabase(TEMPLATE_CANDIDATE)

    Do Until db Is Nothing
        Call ProcessDB(reportDB, db)
        Set db = dbDir.Getnextdatabase()
    Loop
End Sub

Sub ProcessDB(reportDB As NotesDatabase, db As NotesDatabase)
    On Error GoTo err_ProcessDB
    If Not db.Isopen Then
        Call db.Open("","")
        If Not db.Isopen Then
            Print "Can't open " & db.Title
            Exit Sub
        End If
    End If
    Print "Processing " & db.Title
    Call CreateViewAdjusterForms(reportDB, db)
exit_ProcessDB:
    Exit Sub
err_ProcessDB:
    Print Error$
    Resume exit_ProcessDB
End Sub

Sub CreateViewAdjusterForms(reportDB As NotesDatabase, db As NotesDatabase)
    Dim doc As NotesDocument
    Dim v As NotesView
    Dim selectionFormula As String
    ForAll curView In db.Views
        Set v = curView
        selectionFormula = v.SelectionFormula
        If isCriticalProblem(selectionFormula) Then
            Set doc = reportDB.Createdocument()
            doc.form = "ViewAdjuster"
            doc.server = db.Server
            doc.database = db.Filepath
            doc.viewname = v.Name
            doc.SelectionFormula = selectionFormula
            Call doc.Computewithform(True, False)
            Call doc.Save(True, True)
        End If
    End ForAll
End Sub

Function isCriticalProblem(formula As String) As Boolean
    Dim work As String
    work = LCase$(formula)
    ' Match the @functions, not bare words, to avoid false positives
    isCriticalProblem = InStr(work, "@now") <> 0 Or _
        InStr(work, "@today") <> 0 Or _
        InStr(work, "@tomorrow") <> 0 Or _
        InStr(work, "@yesterday") <> 0
End Function
Once you have the formulas you want to review whether they are really critical and whether the revised formula will actually work. Then design a scheduled agent that checks those views. I run it hourly and on all servers (you could use the lastRun date to only check databases that haven't been processed today).


(Browser) Client side XSLT transformations

In the beginning there was HTML, ahem, XML, ahem, SGML and the world was good (anybody remember the 6150 RT?).
It gave birth to HTML and XML and XSLT to transform between them and the world was good. We would get XML data and render it either client or server side (like my favorite XForms implementation) into HTML.
Only brave ones would do this client side, since an XSLT transformation is heavy and requires the mastery of yet another set of set-theory based languages (XPath, XSLT).
Luckily JSON arrived and we now have a rich selection of template engines. I like fill.js, which leaves the template markers untouched, so it can repeat the template transformations; I even ported it to Java.
But template engines only make sense when you control both ends of the pipe. Quite often data might only be offered in XML/ATOM or its specialization OData. IBM Connections, IBM Forms (and other IBM products), SAP and software from Redmond all provide REST APIs that deliver ATOM or OData formats.
To incorporate that into your application you need a script that retrieves and transforms the data, and an XSLT stylesheet. Go read the article and download the Dojo script, I'll wait.
The data I'll transform is the todo list from IBM Connections 4.0 on IBM Greenhouse (Don't have an account? Sign up today, it's free).
The ATOM stream with all todos lives here: (it will show all todos you can see, not only your personal ones; you would need to add a filter parameter to limit them). And here's the sample stylesheet:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:app="http://www.w3.org/2007/app"
    xmlns:snx="http://www.ibm.com/xmlns/prod/sn"
    xmlns:os="http://a9.com/-/spec/opensearch/1.1/"
    xmlns:xhtml="http://www.w3.org/1999/xhtml"
    xmlns:thr="http://purl.org/syndication/thread/1.0">
    <xsl:template match="/">
        <xsl:apply-templates select="atom:feed" />
    </xsl:template>
    <xsl:template match="atom:feed">
        <div id="todocontent" class="tabContent" style="display:block">
            <h3><xsl:value-of select="atom:title"/> (last update <xsl:value-of select="atom:updated"/>)</h3>
            <xsl:variable name="numentries" select="os:totalResults" />
            <table class="lotusTable lotusClear" border="0" cellspacing="0" cellpadding="0" summary="There are a total of {$numentries} entries">
                <xsl:apply-templates select="atom:entry" />
            </table>
        </div>
    </xsl:template>
    <xsl:template match="atom:entry">
        <tr>
            <td class="lotusFirstCell">
                <xsl:element name="img">
                    <xsl:attribute name="border">0</xsl:attribute>
                    <xsl:attribute name="src"><xsl:value-of select="snx:icon"/></xsl:attribute>
                </xsl:element>
            </td>
            <td>
                <h4>
                    <xsl:element name="a">
                        <xsl:attribute name="href"><xsl:value-of select="atom:link[@type='text/html']/@href"/></xsl:attribute>
                        <xsl:value-of select="atom:title"/>
                    </xsl:element>
                </h4>
                <div class="lotusMeta">Created by <span class="lotusPerson">
                    <xsl:value-of select="atom:author/atom:name"/></span>
                    <span class="lotusDivider" role="separator">|</span>
                    <xsl:value-of select="atom:published"/>
                    <span class="lotusDivider" role="separator">|</span>
                    <span class="lotusTags">Tags: <xsl:apply-templates select="atom:category" /></span>
                    <xsl:if test="snx:assignedto">
                        assigned to <xsl:value-of select="snx:assignedto/@name"/>
                    </xsl:if>
                </div>
            </td>
        </tr>
    </xsl:template>
    <xsl:template match="atom:category">
        <xsl:value-of select="@term"/>,
    </xsl:template>
</xsl:stylesheet>
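Before writing a stylesheet against a live feed, it can help to sanity-check the feed structure outside the browser. A small Python sketch walking the same ATOM elements (the sample feed is invented, and the exact snx namespace URI is an assumption to verify against your own feed):

```python
import xml.etree.ElementTree as ET

# Namespace prefixes as used in the stylesheet above; snx is IBM's
# social namespace (treat the exact URI as an assumption to verify).
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "snx": "http://www.ibm.com/xmlns/prod/sn",
}

SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:snx="http://www.ibm.com/xmlns/prod/sn">
  <title>To Do Items</title>
  <entry>
    <title>Review draft</title>
    <author><name>Jane Doe</name></author>
    <snx:assignedto name="John Roe"/>
  </entry>
</feed>"""

feed = ET.fromstring(SAMPLE_FEED)
for entry in feed.findall("atom:entry", NS):
    title = entry.findtext("atom:title", namespaces=NS)
    assigned = entry.find("snx:assignedto", NS)
    who = assigned.get("name") if assigned is not None else "unassigned"
    print(f"{title} -> {who}")  # Review draft -> John Roe
```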
As usual YMMV


Starting Domino on Linux using UPSTART

When running Domino on a proper platform (AIX, Solaris, Linux), starting and stopping the Domino server was left to customizing a script from a technote or a Redbook's FTP site, as far as official IBM resources go. Of course the professional source is Daniel Nashed's ultimate Domino startup script. One script to rule them all.
On Linux however the way services are started changed a while ago. The preferred method (definitely in Ubuntu, but also available as an option in Fedora, RedHat and OpenSuse) is Upstart (there seems to be a push towards systemd, but that's a story for another time).
Upstart allows for more flexible control and faster boot times of your environment. To configure your Domino on Linux we will use 2 scripts and one configuration file for each instance (inspired by the same approach for node.js).
The first file is /etc/init/domino.conf with the following content:
# Sample job script for domino, experimental - use at your own risk, don't use in production
description 'lotus domino upstart script'
author '@notessensei'

# Stop on shutdown - but no start directive - since it gets started by another script
stop on shutdown

# Instance allows for multiple scripts running
instance "Domino - $NAME"

# Restart if it was crashing, with a limit
respawn limit 5 60

# Will go into the background
expect fork

# Kill timeout 20 sec - to give Domino a shutdown chance
kill timeout 20

# Check for the password file
pre-start script
    . /etc/domino/$NAME.conf
    # Ensure the pwd file is there and has the right owner/access
    if [ ! -f $PWD_LOCATION ]; then
        touch $PWD_LOCATION
    fi
    chmod 0400 $PWD_LOCATION
end script

# The script to start the server
script
    . /etc/domino/$NAME.conf
    exec sudo -u $SERVER_USER sh -c "cd ${DATA_LOCATION}; cat ${PWD_LOCATION} | /opt/ibm/lotus/bin/server" >> $LOG_TO 2>&1 &
end script

# Run before shutdown - tell Domino to go down peacefully
pre-stop script
    . /etc/domino/$NAME.conf
    exec sudo -u $SERVER_USER sh -c "cd ${DATA_LOCATION}; /opt/ibm/lotus/bin/server -q"
end script

# Make sure it is really dead
post-stop script
    . /etc/domino/$NAME.conf
    exec sudo -u $SERVER_USER sh -c "cd ${DATA_LOCATION}; /opt/ibm/lotus/bin/nsd -kill"
end script
Secondly you create the configuration file in /etc/domino/server1.conf (you need to create the directory if needed, it isn't there by default):
#Configuration variables for Domino instance startup
#Adjust the values below to your installation (typical defaults shown)
#User and group for Domino
SERVER_USER=notes
SERVER_GROUP=notes
#Where does the data go
DATA_LOCATION=/local/notesdata
#Must exist and have 0400 for SERVER_USER:SERVER_GROUP
PWD_LOCATION=/etc/domino/server1.pwd
#Log file
LOG_TO="${DATA_LOCATION}/domino.log"
The script will be able to start the Domino instance using start domino NAME=server1. For additional instances (partitioned servers) you only need to create an additional .conf file in /etc/domino.
The final missing piece is the script that starts all the configured instances. Here we can more or less copy Ian's node script as /etc/init/alldomino.conf:
description 'Start all domino instances at boot'
author '@notessensei'

start on (local-filesystems and net-device-up)

task

script
  for file in `ls /etc/domino/*.conf` ; do
    filename=`basename ${file%.*}`
    start domino NAME=$filename
  done
end script
That's all you need. As usual YMMV.


Creating Notes 8.5.3++ plug-ins with Eclipse 4.2

One skill that entitles you to the secret handshake is the ability to develop plug-ins for the Lotus Notes clients. Sadly that is one of the technologies that held great promises (client side mashups anyone?) but was clobbered by being too complicated and buggy, and by the rise of the "mobile first" mantra.
Still the Plug-in Jedi are with us and there are a number of wildly successful, useful and feature-rich plug-ins that were contributed from outside IBM. Of course special mention goes to The Master's work.
One big stumbling block for development is the Expeditor Toolkit, which is stuck on Eclipse 3.4.2 (while the current version of Eclipse is 4.2). With the toolkit it is just a few clicks to create a runtime/debug configuration to test your plug-ins; without it you are in for parameter guessing.
When searching for information you will find Mikkel's instructions for 3.5/8.5.2 and the entry in the Designer help file (for the latter you need to know exactly what you are looking for). But both instructions won't work for Notes 8.5.3 or 8.5.4. With the help of special friends I figured it out:
  • Install Eclipse 4.2 (Classic will do)
  • In Window - Preferences - Java - Installed JRE add the Notes JRE (I called mine Notes854)
  • In Window - Preferences - Plug-in Development - Target Platform add an new entry (based on an empty template) and add the directory location /opt/ibm/lotus/notes/framework/rcp/eclipse and /opt/ibm/lotus/notes/framework/shared/eclipse (adjust the directories to your path)
  • In Run - Run Configurations create a new Eclipse Application. Give it a name, in my example it is "SmartFile"
  • Leave the Workspace Data Location at its default ${workspace_loc}/../runtime-SmartFile
  • Select "Run a product" and point the Runtime JRE to the one you just configured (Notes854)
  • In Arguments (second tab) enter for the Program arguments (in one line):
    -personality -console -pluginCustomization "${rcp_target}/../plugin_customization.ini"
  • In the VM arguments enter the following. There seems to be no more need to specify variables or an installid:
    "${rcp_base}/" -Xss512K "${rcp_data}" "-Drcp.home=${rcp_target}/../.."

    (I don't know why some of the arguments are in quotes). Keep the working directory as default
  • In Plug-ins (3rd Tab): check all Target Platform plug-ins as well as your shiny new ones
  • In Configuration (4th tab) leave "Use default location" as is
    and set "Use an existing config.ini": ${rcp_base}/config.ini
  • I didn't touch Tabs 5-7
  • On my machine the Notes executable is on the path, but I don't know if that is mandatory
That's it. While you are at it, take a little refresher (keep in mind your target platform is still Eclipse 3.4.2)
As usual YMMV


The 2,147,483,648 NoteId limit

QuickImage Category
Every Note in a Lotus Notes database has 2 identifiers: a 32-character hex UniversalID (UNID) and a NoteId (actually there are some more). The UNID is assigned once, never changes (unless you force it), is derived from a timestamp and stays unique across all replicas.
Its 128 bits (incidentally the same size as an IPv6 address) are divided into a first 64 bits derived from the replica ID and a second 64 bits for the individual document (but you could overwrite that). Normal use gives you 2^64 possible values (≈ 1.8 × 10^19, roughly 18 quintillion) for documents.
The NoteId on the other hand is unique only to one given database and changes when you create a new replica or run compact -c. For performance reasons (Notes has been around for a while) it is a 4-byte hex number (where only even numbers are used). For backward compatibility in the API that hasn't changed yet.
So you have 2^31 = 2,147,483,648 NoteIds at your disposal. A NoteId is assigned when a Note is saved in a database (could be design or data) and is never reused, even after a document is deleted and the resulting deletion stub has expired and been removed.
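The arithmetic behind those numbers is easy to verify in a few lines of plain Java:

```java
public class NoteIdMath {
    public static void main(String[] args) {
        // UNID: 32 hex characters = 16 bytes = 128 bits
        int unidBits = 32 * 4;
        System.out.println(unidBits); // 128

        // NoteIds are 32 bit, but only even values get assigned -> 2^31 usable ids
        long noteIdCeiling = 1L << 31;
        System.out.println(noteIdCeiling); // 2147483648

        // 64 bits for the document part of the UNID -> roughly 1.8 * 10^19 values
        double unidDocumentSlots = Math.pow(2, 64);
        System.out.println(unidDocumentSlots); // 1.8446744073709552E19
    }
}
```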
When you have a very busy (large) database where many documents are constantly created and deleted you might hit that ceiling, even when the document count and database size seem moderate. The error condition is documented in Technote swg21220384 (the error can also occur in normal operation).
To my best knowledge you can't set up a DDM probe (or any other easy admin tool) to monitor "NoteId exhaustion". A suitable preventive remedy is to schedule a regular compact -c for your busy databases.
Be careful with that, since the dose makes the poison and running that task every day will compound the side effects. If your database does need a daily compact, you have a structural problem in your application - come and see me. Typical intervals are weekly or monthly. Smart admins spread them out (so some compact is running every day). The side effect for compact -c on Windows (applied to SAN too) is disk fragmentation. So make sure you take care of that.


XSS Vulnerabilities in Domino

An IBM Technote updated on 15 Aug 2012 points to a set of XSS vulnerabilities in the Lotus Domino server. You can also read the disclosure about that. But first go to your server configuration document and add:
Welcome back (you don't edit the notes.ini directly, do you!). When looking at XSS vulnerabilities, they follow the same pattern as SQL injections: input that has been provided by the user is not fully sanitised and is used for output generation. In web applications the "usual suspects" for such attacks are:
  • Framesets
  • URL parameters
  • Error and redirection pages
  • Form submissions
Finding all those is quite a task for both the developers and the attackers, since URLs can be encoded in many different ways (@URLDecode is your friend). Luckily (for the former) and unfortunately (for the latter) there is help available. Poking around in Domino made me add two new server rules (Update: thanks to Sven for pointing that out; not needed on Domino 8.5.4 and later):
  • Type of rule: HTTP response headers
  • Incoming URL pattern: */xsp/.ibmxspres/*
  • HTTP response codes: 404
  • Expires header: Don't add header
  • Custom header: Content-Type : text/plain (overwrite)
  • Type of rule: HTTP response headers
  • Incoming URL pattern: */xsp/.ibmmodres/*
  • HTTP response codes: 404
  • Expires header: Don't add header
  • Custom header: Content-Type : text/plain (overwrite)
Of course a server rule requires that you use the "Internet sites" configuration - since that configuration option was introduced in R6.0 it is high time you use them.
As usual YMMV


Identifying platform dependent code in your Domino application

Domino runs on many platforms, so you have the freedom of choice what to use. My personal favorite currently is zLinux 64Bit; the mainframe I/O capabilities are delightful. However you might have limited your choices by the way you developed your applications. Luckily LotusScript is case insensitive on all platforms, so you only have to pay attention to pieces of code that interact with the world around you. Specifically you need to watch out for:
  • DECLARE: where in LotusScript you refer to OS level DLLs that most likely won't be available on other platforms
  • Execute: There could be anything inside, you don't know
  • Evaluate (and @Eval): Again there could be anything inside
  • SHELL: Executing OS level calls
  • Anything that reads/writes a file: NotesStream, Standard file I/O etc.
  • @DBCommand (and the other @DB functions): When using ODBC that driver must be on the new platform. @DBCommand also allows you to actually call a DLL (a very little known extension point) when you use a keyword other than "Domino" or "ODBC". That DLL needs to be available on the target platform
  • All statements that need attention: ActivateApp, ChDir, ChDrive, CreateObject, CurDir, CurDir$, CurDrive, CurDrive$, Date, Date$, Declare, Dir, Dir$, FileLen, Len, LenB, LenBP, LOF, GetFileAttr, GetObject, Input #, Input, Input$, InputB, InputB$, Line Input, Print, Write #, IsObject, IsUnknown, Open, Lock, Unlock, SendKeys, SetFileAttr, Shell, Time, Time$
Now when you are tasked with evaluating a server move to a different platform, you can use DXLMagic to get an idea how many problem areas you might deal with. These are the steps:
  1. Export your databases using the GUI:
    Export NSF
    Make sure you checked the option "Create metric files"
  2. In your target directory you will find two property files. The first one determines what goes into the CSV file, the second one generates additional tags based on code fragments. You can edit both files and run the extraction from the command line (much faster than the initial export) to get the "how much to pay attention to" report. Edit the tagging file and change its content to the following
    #***** Platform investigation ******
    Declare\ =C_CALL
    Input\ \#=FILE_IO
    Line\ Input=FILE_IO
    Write\ \#=FILE_IO
    as\ notesui=UI_script
    source\ as=UI_event

    Then edit the other file and add the new tags (you can keep the existing ones if you want):

    (you might want to skip the PRINT statement since it is often used in web agents and nothing special happens there)
  3. Run DXLMagic from the Command Line to extract and document based on the new property files. The result will be a new CSV file with extra columns
    java [DirectoryWithTheDXL]
    java [DirectoryWithTheDXL] CsvFileName
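The property files above are plain keyword-to-tag mappings, so the tagging idea can be sketched in a few lines of Java (a simplified stand-in for illustration, not DXLMagic's actual implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CodeTagger {
    // tag -> pattern, mirroring the property file entries above
    private static final Map<String, Pattern> PATTERNS = new LinkedHashMap<>();
    static {
        PATTERNS.put("C_CALL", Pattern.compile("Declare ", Pattern.CASE_INSENSITIVE));
        PATTERNS.put("FILE_IO", Pattern.compile("(Input #|Line Input|Write #)", Pattern.CASE_INSENSITIVE));
        PATTERNS.put("UI_script", Pattern.compile("as notesui", Pattern.CASE_INSENSITIVE));
    }

    // counts how often each "needs attention" pattern occurs in a piece of source code
    public static Map<String, Integer> tag(String source) {
        Map<String, Integer> result = new LinkedHashMap<>();
        for (Map.Entry<String, Pattern> e : PATTERNS.entrySet()) {
            Matcher m = e.getValue().matcher(source);
            int count = 0;
            while (m.find()) count++;
            if (count > 0) result.put(e.getKey(), count);
        }
        return result;
    }

    public static void main(String[] args) {
        String lotusScript = "Declare Function GetTickCount Lib \"kernel32\" () As Long\n"
                + "Open fileName For Input As #1\nLine Input #1, txt";
        System.out.println(tag(lotusScript)); // {C_CALL=1, FILE_IO=1}
    }
}
```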
Just to be clear: these steps show you how many instances there are, but not where or whether changes are required. Most likely all PRINT and @Db commands will be just fine. But it is a fast method to get a first impression. Based on the DXL you could devise an XSLT report to show the problem spots in detail.
As usual YMMV


Fixing Domino's LDAP

Domino's LDAP needs some fixing before it can be used as a fully standards-compliant LDAP, e.g. for Linux authentication. Alan Bell described the procedure long ago, but no action was taken by IBM/Lotus. So Nathan stepped forward and published a project on OpenNTF.
Unfortunately the template contained modifications of IBM copyrighted code (unlike the mail and application templates, the Domino Directory template was never published under an Apache 2.0 license), so the project had to be taken down. I had a look at it and used DXLMagic to run a comparison that revealed only modest changes:

XMLComparison: pubnames.ntf.dxl to DemoDirectory.nsf.dxl

Modified ( 86 changes)
form " (PublicDirectoryProfile) " 37 changes ( A57A396D2617685D852565D300812356 )
outline " (AllViews) " 30 changes ( 8BD254C7A4FBCA6B85256A450072C65D )
subform " $GroupExtensibleSchema " 4 changes ( D3095315B1612EC2852565D7005C620E )
subform " $PersonExtensibleSchema " 5 changes ( D64258C1970DE85A852565D70058B520 )
view " ($LDAPHier) " 5 changes ( E72D0DA8994BDCB08525668E007FC98E )
view " ($LDAPRDNHier) " 5 changes ( 0E315EB2B26A4532852567DD007187B4 )
Added ( 4 additions)
subform " DominoDirectoryProfileAddin " ( 1FB319E88A4DFA0C48257A320049FCA3 )
subform " LDAPGroupExtensions " ( E57DA00E4BFFE3D648257A320049FCA4 )
subform " LDAPPersonExtensions " ( C479022EFB0069E748257A320049FCA5 )
view " ($IDNumbers) " ( 9864DF762EC0FA9648257A3200499A64 )
Quite a few of those changes are subtle alterations of the pardef settings - which are 100% irrelevant to our task (see the detailed report). The main challenge here are the changes inside the original IBM design elements. Altering a design is one of DXLMagic's capabilities. So it can inject the necessary changes without publishing IBM © code.


What does an ideal Notes Client deployment look like?

In recent customer discussions the question popped up: "What does the ideal Notes client deployment look like?" The intention of the question was less about the technical aspects than about the user experience. Technically you would have a shared install, a good widget catalog, some sensible policies, automatic login and continuous defragmentation in place.
The question is: what should the user see?
There are components in Greenhouse and on OpenNTF. You can also roll your own from mobile websites or Java code. You will want to understand some RegEx; don't expect your users to do that.
Chris Miller published his top 10, here comes mine. I distinguish between "official IBM", "community supported" and "custom stuff around LiveText". Here you go:
  • Official IBM Plug-ins

    • The built-in Sametime client. While I have little use for the Buddy list, it provides the presence awareness for all Notes applications
    • The built-in Activities plug-in. It allows me to keep my IBM Connections Activities offline; make sure to install it
    • The Connections Files plug-in: seamlessly work with your files from Connections, drag & drop files and links to files
    • The Notes learning widget: have learning information at your fingertips
  • Community provided

    • Wildfire: Update all your status information in Connections, Twitter, Facebook, Sametime etc... from one convenient location
    • Bob Balfe's Attachment viewer: Preview attachments in the sidebar
    • The file navigator: Provides access to the file system, drag and drop like the Connections plug-in, but with local files
    • Discussion Add-on: One of my favorite "hidden gems". Copy any document (presumably eMails) with one click into a discussion database. Mini tool to enable sharing information with a team
    • Sender analytics: Find out more about an eMail sender (right click action)
    • Snippets: Single store for text snippets, attachments etc. Invaluable if you repeatedly send the same information to different people
    • Connections alerts plugin: Stay in touch with the happenings on Connections while working in your Notes client
    • IBM community newsletter generator plug-in for IBM Connections: Generates newsletters and other interactions for your IBM Connections communities. Inbox still rules for notifications
    Not mentioned here: Sametime SUT, Softphone or Headset management - useful but vendor specific. Add them if you have them!
  • Custom configuration & 3rd Party

    The challenge here: find identifiable patterns that allow you to link text to external actions. Here are some examples:
    • \#([A-Za-z0-9_-]+)(?![A-Za-z0-9_\]-]): identifies Hashtags and provides the value with and without hashtag to search bookmarks or twitter or [insert-whatever-here]
    • \b([A-Z]{3}) ([0-9.,]+)\b: Match a currency (3 letter currency code, number behind)
    • /\b(1Z ?[0-9A-Z]{3} ?[0-9A-Z]{3} ?[0-9A-Z]{2} ?[0-9A-Z]{4} ?[0-9A-Z]{3} ?[0-9A-Z]|[\dT]\d\d\d ?\d\d\d\d ?\d\d\d)\b/i: Tracks UPS shipping numbers
    • TripIt: for the frequent traveller (add it as url based widget with authentication)
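If you want to sanity-check such LiveText patterns before deploying them, the first two can be exercised with standard Java regular expressions (the sample input strings are made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LiveTextPatterns {
    // the hashtag and currency patterns from the list above
    static final Pattern HASHTAG  = Pattern.compile("#([A-Za-z0-9_-]+)(?![A-Za-z0-9_\\]-])");
    static final Pattern CURRENCY = Pattern.compile("\\b([A-Z]{3}) ([0-9.,]+)\\b");

    public static void main(String[] args) {
        Matcher tag = HASHTAG.matcher("Meet me at #IBMConnect next week");
        if (tag.find()) System.out.println(tag.group(1)); // IBMConnect

        Matcher cur = CURRENCY.matcher("The invoice totals SGD 1,234.50 incl. tax");
        if (cur.find()) System.out.println(cur.group(1) + "/" + cur.group(2)); // SGD/1,234.50
    }
}
```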
What am I missing here?
It is quite some work to configure all of this, but it is worth the effort.
As usual YMMV


Fun with TCPMon and Lotus Traveler

Martin Luther (the original) famously stated "man muss dem Volk aufs Maul schauen" (roughly translated: "you have to watch how people talk"). What worked for a Bible translation also works in IT.
While it is nice to have some documentation, quite often it is inaccurate, incomplete, outdated, factually wrong or simply missing. So listening to "how applications speak" is an essential skill, mentioned here before. In this little weekend project these skills get applied to Lotus Traveler. These are our ingredients:
  1. Domino server with Traveler installed, listening on HTTP port 80 (https won't do). Also make sure that compression is switched off in the internet site document for Traveler; it's no fun to watch gzip encoded data
  2. Apache TCPMon installed and listening on port 8888 in proxy mode
  3. The Android SDK, installed with an Android device emulator. Alternative: a physical Android device (and your ports open on your machine)
We will configure the Android device to use the PC as its proxy and then install Lotus Traveler and sync data. TCPMon will show us what's on the wire (Windows users could use Fiddler as an alternative to TCPMon). Let's get started:
  1. TCPMon needs to be configured in Proxy mode.
    Setting up of TCPMon
    You can test that it is working by pointing your browser's proxy settings to it. Note: you need the real IP address in the Android phone or emulator. The TCPMon screen then nicely shows the outgoing requests and incoming responses:
    TCPMon in action
  2. Time to configure an Android emulator. After installing the Android SDK you can launch tools/android from the install directory to download and install the various versions of the Android OS Emulations from 1.0 all the way to 4.0.3. In the Tools menu you can manage your AVD (Android Virtual Devices) and configure different images. The emulator comes with the option of snapshots, quickstart, cold boot etc.
    Various Android Virtual Devices
    I found the 4.0.3 devices substantially slower than the 2.3.x devices, which might be due to the higher default screen resolution. Once you have configured the AVD, you can start it directly from the menu
  3. However starting an AVD from the command line is much more practical, since it allows for a lot of parameters. Most notably the ability to cold-boot and the ability to define a proxy or even dump the whole TCP conversation into a dump file. So I started my AVDs with
    android-sdk-linux/tools/emulator -avd MyPhone -http-proxy and
    android-sdk-linux/tools/emulator -avd IcecreamPhone -http-proxy resulting in
    Android 4.0.3 running
  4. Along the way I encountered a few oddities: some of the commands would ignore the proxy parameter and try to reach out directly. I got that sorted out by configuring the device to use the proxy in the access point settings.
    Proxy settings in Android
    (Don't forget to use Menu - Save when changing this). It is important to clear out username, password and server, which carry a one-character value by default. For a reason beyond my comprehension the initial configuration of Traveler would not work with any proxy setting - it could be TCPMon here, with Fiddler it might work - so after downloading the installer I had to switch off both proxy settings to run the installer and complete the initial Traveler configuration. The AVD can simply be terminated and relaunched without the proxy parameter and it will resume where it left off
  5. Make sure to have the proxy back in the HTTP stream to watch what is happening. It is quite enlightening
  6. The first thing once the new device is ready is a request to fetch the configuration which, against the current fashion, is delivered as an XML document that very closely resembles DXL. The request looks like this:
    GET http://yourtravelerserver:80/servlet/traveler?action=getConfig&deviceId=Android_somedeviceid HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Host: yourtravelerserver:80
    Connection: Keep-Alive
    User-Agent: Lotus Traveler Android
    Authorization: Basic Base64EncodedUserNamePassword=
  7. Then you can go on and watch the initial sync happening. To understand what device and server are saying to each other, you need to ask OMA, where the best documentation is available - you might have guessed it: Traveler on Android uses SyncML. There you can learn how "push" actually works
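The Authorization header in the captured request above is plain HTTP Basic authentication; rebuilding it in Java (with made-up credentials) shows exactly what travels over the wire:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TravelerRequest {
    // builds the Authorization header value the device sends (hypothetical credentials)
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // placeholder host and device id, matching the captured request above
        String url = "http://yourtravelerserver:80/servlet/traveler"
                + "?action=getConfig&deviceId=Android_somedeviceid";
        System.out.println("GET " + url + " HTTP/1.1");
        System.out.println("Authorization: " + basicAuth("jdoe", "secret")); // Basic amRvZTpzZWNyZXQ=
    }
}
```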
Next stop: repeat the above on a Mac. On the Mac Traveler uses ActiveSync, so you need to look elsewhere for the documentation. I'll share more about the protocols and findings in future posts (just hope it is raining again on a weekend here).


Extracting data from Domino into PDF using XSLT and XSL:FO (Part 4)

This entry is part of the series Domino, XSL:FO and XSLT that dives into the use of XSLT and XSL:FO in IBM Lotus Domino.

So far we had a look at the overall process, some Java to convert FO into PDF and FO basics. Time to get some Notes data exported. To make the task easier I created a little helper that allows convenient export into XML. Currently it works on a NotesDocument(Collection) basis, but it would be a small step to add a method that uses a view navigator for speed.
The most complete rendering of a NotesDocument is done using the .renderXML method. For a collection you use a NotesDxlExporter. Unfortunately both methods are comparably slow. So I added an alternate approach (which for the moment only works if you don't have RichText) to export lean XML.
A Form2XMLDefinition class (optional) lets you pick which fields end up in the XML file. It also allows you to group those fields (more on that another time - or look at the source). So the methods are:
package com.notessensei.fop;

import java.io.ByteArrayOutputStream;

import lotus.domino.DocumentCollection;
import lotus.domino.Session;

public interface Notes2XML {
    public abstract void addForm(Form2XMLDefinition newForm);
    public abstract ByteArrayOutputStream renderDocument2DXL(lotus.domino.Document doc);
    public abstract ByteArrayOutputStream renderDocument2XML(lotus.domino.Document doc);
    public abstract ByteArrayOutputStream renderDocumentCollection2DXL(Session s, DocumentCollection dc);
    public abstract ByteArrayOutputStream renderDocumentCollection2XML(DocumentCollection dc, String rootName);
}
I expanded the PDFReport class with a convenience method: getNotesXMLExporter() to be able to reuse my managed bean (beginnings of a Facade pattern).


Extracting data from Domino into PDF using XSLT and XSL:FO (Part 3)

This entry is part of the series Domino, XSL:FO and XSLT that dives into the use of XSLT and XSL:FO in IBM Lotus Domino.

In Part 2 I introduced the Java code required to output PDF from XSL:FO and how that code can be called from an XAgent. Now let's have a look at XSL:FO itself. It is a W3C defined standard to lay out documents. You could consider it a competitor to PostScript or PDF. XSL:FO contains both layout instructions and content. Since it is expressed entirely in XML, it is easy to manipulate (follow me for a moment and accept that XML is easy) and - more importantly - easy to split into content and layout for reuse. Typically a complete XSL:FO document is only an intermediate step in PDF production. The report design (without data) would be contained in an XSLT stylesheet that gets merged with XML data. You could consider XSLT the "templating language" of XSL:FO.
An XSL:FO document has a single <fo:root> element. This contains one or more page-sequence elements, which contain the actual content, and a layout-master-set, which defines the pages.
XSL:FO layout-master-set
Besides the page size (and content orientation) a simple-page-master defines header, footer, left and right column (called regions). You need to get your math right there. Both the margin and the regions are subtracted from the page size to compute the real margins. When you have a margin="1cm" and a region-start with 3cm width, then the effective left margin is 4cm. Read up on an XSL:FO tutorial for more details.
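That margin math can be written down in two lines of Java, using the example values from above:

```java
public class FoMarginMath {
    // effective left margin = page margin + extent of region-start (the left column)
    static double effectiveLeftMarginCm(double pageMarginCm, double regionStartCm) {
        return pageMarginCm + regionStartCm;
    }

    public static void main(String[] args) {
        // the example from the text: margin="1cm" and a 3cm wide region-start
        System.out.println(effectiveLeftMarginCm(1.0, 3.0) + "cm"); // 4.0cm
    }
}
```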
The main element region-body lets you specify a column-count attribute that will create multi-column page layouts without the need for a table and a manual calculation of column content. You could also define alternating page masters, like left and right pages or different pages for a chapter beginning - read the details in the repeatable-page-master-alternatives specification.
Your main content is contained in one or more page-sequences.
The page sequence contains the content. Don't get confused: a page-sequence represents content of n number of pages (n >=1), not just one page. You need more than one page sequence only when you want the page layout to use a different master/style (of course using the alternatives mechanism described above you can achieve alternate styles inside a single page sequence). The page sequence contains one or more flows. A flow is targeted at a region (there are 5 of them) and contains block elements (think HTML div,p,table etc.) that contain the content. There are a huge number of specialised attributes and elements (stuff like watermarks or graphics) available you can learn about in the specifications.
In practice you write a sample XSL:FO document and transform it into PDF. Once you are satisfied with the results, you convert the XSL:FO document into an XSLT stylesheet. This is easier than it sounds: you simply wrap xsl:template tags around your FO and replace your sample content with xsl:apply-templates statements. w3schools has a simple example. Of course XSLT is an interesting topic on its own, go and read the XSLT 2.0 and XPath 2.0 Programmer's Reference (also available on Kindle).
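The merge of stylesheet and data can be sketched with the JDK's built-in XSLT processor; the element names below are simplified placeholders for illustration, not real XSL:FO:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltMerge {
    // runs an XML string through an XSLT stylesheet and returns the result
    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String data = "<order><customer>ACME</customer></order>";
        // a template wrapping the data - a real stylesheet would emit fo: elements here
        String style = "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
                + "<xsl:output method='xml' omit-xml-declaration='yes'/>"
                + "<xsl:template match='/order'>"
                + "<block><xsl:apply-templates select='customer'/></block>"
                + "</xsl:template>"
                + "<xsl:template match='customer'><xsl:value-of select='.'/></xsl:template>"
                + "</xsl:stylesheet>";
        System.out.println(transform(data, style)); // <block>ACME</block>
    }
}
```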
Next stop: How to pull Notes data into XML for processing.


Extracting data from Domino into PDF using XSLT and XSL:FO (Part 2)

This entry is part of the series Domino, XSL:FO and XSLT that dives into the use of XSLT and XSL:FO in IBM Lotus Domino.

In Part 1 I discussed the process of getting data from Domino to PDF using only open standards. In this part I want to start backwards: If I have valid XSL:FO, how do I get the PDF output? In a later installment I will discuss what XSL:FO is capable of and how to create it using XSLT. Finally I will discuss how to extract XML from your data source (Domino or others).
I chose the backwards approach since it is easier to understand walking from the desired output towards a stylesheet than creating a stylesheet when the result isn't clear. Bear with me on this approach.
I will use Apache FOP as my rendering engine. The Quickstart compresses the needed steps neatly into 1-2-3 (pun intended):
  1. Download
    This is the easy part. Pick one of the mirror servers and get the download (24M) - if you read this later, check whether there is a newer version. Extract it to a directory of your choice; we will import/copy what we need from there
  2. Configure
    You have choices:
    • copy the Fop files into jvm/lib/ext (bad idea)
    • import it into an NSF (twice: once for agents, once for XPages)
    • create a plug-in (good for sidebar and XPages, not good for agents)
    Having coined the term XAgent I will stick to the XPages version with import into the NSF. Time permitting I'll add a plug-in approach to this little series.
  3. Run
    The FOP website provides a good overview of general Java use as well as servlet specifics. Using that information as a template it is not too hard to implement a managed bean that takes an XML document and an optional stylesheet and returns the rendered PDF as the result
Since our output will be rendered by a managed bean, we need to configure it in the faces-config.xml:
The XAgent follows the usual pattern:
var exCon = facesContext.getExternalContext();
var response = exCon.getResponse();
var out = response.getOutputStream();
response.setContentType("application/pdf");
response.setHeader("Content-disposition","inline; filename=result.pdf");
response.setHeader("Cache-Control", "no-cache");
// In this example the text in sessionScope.longText will be rendered
var written = sessionScope.longText != "" ? "<text>"+sessionScope.longText+"</text>" : null;
// Here the managed bean writes the default rendering to the output stream
// Stop the page from further processing
facesContext.responseComplete();
To get the Java class working I needed to import:
  1. avalon-framework-4.2.0.jar
  2. batik-all-1.7.jar
  3. commons-io-1.3.1.jar
  4. fop.jar
  5. serializer-2.7.0.jar
  6. xmlgraphics-commons-1.4.jar
The class for the PDFReport contains just some wrapper code to render "FO" or "XML with XSLT" into a PDF. It is demo code, you want to add more robust error handling. Next stop: more about FO
As usual YMMV.


LotusScript LSI_INFO potentially harmful on (64Bit) Domino

There is a handy but undocumented LotusScript function LSI_INFO that can lead to crashes of the Domino server, especially on 64Bit systems. Since the function has never been documented, a fix can't be expected any time soon (or at all). Unfortunately that function is used in OpenLog, an incredibly useful logging application written by Julian Robichaux. Here is how to fix this:
  1. Make sure you use the latest release (you also can use the code contained in TaskJam, it is extended to be able to log XPages)
  2. Edit the OpenLogFunctions script library and add into the initialize statement: NoLSIStackTrace = True
  3. Make sure all applications that use OpenLog get the updated library (if you copied it with inheritance intact, a "load design" will do)
As usual YMMV


Teaching an ol' dog a new trick: SET CONFIG param=value update

One of the cardinal sins of Domino administration (besides FTP/copy of a server NSF) is the manual editing of notes.ini variables. To update a notes.ini variable instantly you can use the server console and type set config param=value, while for permanent changes you would add the value to a server configuration document. Using the server configuration document has the advantage that your ini changes will not only survive a reboot or even the loss of your notes.ini, but also serve as documentation.
My colleague Thomas Hampel today pointed me to a technote that I wasn't aware of. It explains how to combine these two steps into one:
set config param=value update will instantly activate a parameter and write it back into the server configuration document. For lazy or forgetful administrators: issue the command set config ENABLE_SRVCFG_NAB_UPDATE=1 update once and it will automatically update any parameter into the configuration document whenever you change it through a set command. Just keep in mind: don't edit that notes.ini on the OS level:
I will never edit a notes.ini on the OS level
(Courtesy of the Bart Simpson Generator)


Extracting data from Domino into PDF using XSLT and XSL:FO (Part 1)

This entry is part of the series Domino, XSL:FO and XSLT that dives into the use of XSLT and XSL:FO in IBM Lotus Domino.

We all know "Notes doesn't print". Nevertheless the topic of document output and reports is not going away, even if I'd like to ban the reports. There are plenty of ready made tools, but today I'd like to start with home cooked reporting.
Why the effort? Using only tools based on open standards you gain more control over the whole process and you can use whatever seems fit. The downside: there is more you need to know, and it might not be suitable for business users (but it's great to torture interns). In the long run you have a portfolio of source transformations that you can combine, potentially faster than with any reporting tool. The general principle is "Extract Transform Render":
Extract Transform Render
  1. Extract:
    Whatever pulls out the XML for the second step will do. For list-type rendering, ?ReadViewEntries or simple DXL exports will do the trick. Quite often you might opt for some bespoke code to extract data with an eye on a fast and/or easy transformation phase. You might also consider extracting your data in conformance with an established international standard
  2. Transform:
    This step usually takes the XML from the extract phase and runs it through one or more XSLT transformations. XSLT is a kind of IT black magic (others say it's just set theory) and can use quite some computing power. For high performance the pros use a dedicated appliance. Once you get the hang of XPath you can do some amazing reporting (e.g. "give me all sales reps where, among the last 5 sales of the 3 reps next to them in the ranking, there was a carpenter")
  3. Render:
    Rendering is easy. The outcome of the transformation step will be XSL:FO which is a page description language. Use a free renderer or a commercial offering and a few lines of code. The output typically is a PDF file, but you can target graphic formats too.


Importing EML files into Notes (lots of them)

My friend and Lotus Champion Weerapong Panpai from Zenith in Thailand asked me: "How difficult is it to do a bulk import of eml files into Lotus Notes?" MIME as a format is plain ASCII, so at first glance it seems simple.
However on closer look you might guess where the author of an outstanding movie got his inspiration from. RFC 2045 nicely states: "NOTE: The previous four definitions are clearly circular. This is unavoidable, since the overall structure of a MIME message is indeed recursive." So trying to parse these messages on my own was clearly out of the question. Luckily there is ready-made code available. On one side there is Java EE with the MimeMessage class, on the other Apache James. An implementation of the javax.mail.internet classes is hidden somewhere in the Notes client, which would favour MimeMessage. However I found it easier to work with the mime4j classes. Here is what I came up with:
package com.notessensei.mimeimport;

import java.io.IOException;
import java.io.InputStream;

import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;

import org.apache.james.mime4j.MimeException;
import org.apache.james.mime4j.parser.ContentHandler;
import org.apache.james.mime4j.parser.MimeStreamParser;

public class Mime2Doc {
    public void importMail(Session s, InputStream in, Document doc) throws NotesException, MimeException, IOException {
        doc.replaceItemValue("Form", "Memo");
        MimeStreamParser parser = new MimeStreamParser();
        ContentHandler h = new DocContentHandler(s, doc);
        parser.setContentHandler(h);
        parser.parse(in);
    }
}
The bulk of the work happens inside the ContentHandler. Using a Stack (which works LIFO) I can track mime entries inside mime entries inside mime entries. Still the code is just below 100 lines.
The final piece in the puzzle is the class that helps to track the MIME Parts that are put onto the Stack and returns MIME-Type and encoding derived from the header fields. You are ready to test.
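The LIFO bookkeeping can be sketched independently of mime4j: every "entity starts" event pushes onto the stack, every "entity ends" event pops, so the handler always knows which (possibly nested) part the current body belongs to. Class and method names below are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PartTracker {
    private final Deque<String> parts = new ArrayDeque<>();

    // called when the parser reports the start of a (possibly nested) entity
    public void startPart(String mimeType) { parts.push(mimeType); }

    // called when the entity ends; returns the type we just finished
    public String endPart() { return parts.pop(); }

    // the entity the current body content belongs to
    public String currentPart() { return parts.peek(); }

    public static void main(String[] args) {
        PartTracker t = new PartTracker();
        t.startPart("multipart/mixed");
        t.startPart("multipart/alternative"); // nested inside mixed
        t.startPart("text/plain");
        System.out.println(t.currentPart()); // text/plain
        t.endPart();
        t.startPart("text/html");
        System.out.println(t.currentPart()); // text/html
        t.endPart();
        t.endPart(); // back at the outer multipart
        System.out.println(t.currentPart()); // multipart/mixed
    }
}
```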


Wipe them out - all of them (when a design refresh doesn't work)

You created that shiny new version of your application, ran (in no particular order) fixup, compact and updall and replaced the design. Still some of the design elements haven't been updated. Typically that happens when no attention was paid to the "prohibit design refresh" flag of the database design (all your predecessor's fault of course).
While you can go in and hunt the settings down one by one (tip: use a DXL export and just look for the property), it is sometimes faster and more thorough to strip the offending database(s) of their design entirely before applying the "Replace Design" action.
To do that I use a script in my little universal toolbox. A word of caution:
  • Use it at your own risk
  • Backup the database beforehand
  • Check in Domino Designer's "Navigator view" (the Eclipse panel, not a Notes view) that they are all gone
  • You have been warned
As usual YMMV


DOTS and the eMail life cycle

To use the words of my friend Mikkel: "Domino OSGi Tasklet Container (or DOTS for short) is an uber-cool OpenNTF project that allows you to write addins for the Domino server in Java. The project used to be called JAVADDIN which kind of gives the purpose away." Together with a simple mail rule DOTS is your entry ticket into a complete eMail life cycle management solution. Despite eMail being around for a very long time, the concept of eMail life cycle management seems to be very alien to a lot of IT managers (and it definitely sounds less exciting than a Cloud computing strategy). Let's have a quick look:
  • All eMails are created equal: someone wants to communicate something. The eMail leaves the sender's mail application and hits the mail server.
  • The eMail is subjected to more or less technical scrutiny: is it small enough, is there no virus, is routing to a next hop possible (and where to)?
  • The eMail is subjected to business decisions: can that user send (e.g. leak prevention, destination check etc.), does the eMail need retention, does the eMail need to trigger a business process (approval, alert, associate, alter)?
  • Messages get delivered and now the sent message is a received message
  • The eMail is subjected to more or less technical scrutiny: is it small enough, is there no virus, can messages from this source be accepted, can it be delivered?
  • The eMail is subjected to business decisions: can that user receive, does the eMail need retention, does the eMail need to trigger a business process (alert, associate, alter)?
eMail Life Cycle considerations
While the technical rules are well understood, the business rules typically are not developed deeply; after all "it is just eMail" and not a business rule engine. DOTS allows you to "open the door" to business rule processing. The beauty of a DOTS approach: no template needs to be harmed! It will work with all access methods (client, web, POP, Traveler) and for any message (sent by user, sent by agent). A few steps are necessary:
  1. DOTS needs to have the time to "catch" messages that arrive at the server. Since the router would get in its way, a mail rule in the server configuration document needs to work on "all documents" and set the routing status to "HOLD" (which is stored in the NotesItem "RoutingState")
  2. Write a DOTS tasklet that processes all documents that have been created there (that's a triggered tasklet, not a scheduled one) and, once done, removes the NotesItem "RoutingState". The router will then pick up the message for delivery
  3. Business rules are wide and many, so you might want to use a rules engine. JSR 94 describes the Java API for a rules engine and WebSphere ILOG JRules is an implementation of it. You also can consider one of the open source rule engines or adapt Apache Mailets to your needs. Of course, for a small logic set, coding it in the tasklet will give the best performance
  4. Test, Test, Test!
  5. A prêt-à-porter solution, if you want ready-made rules, is the GROUP iQ.Suite, combining technical and business rules.
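The hold-and-release mechanics of steps 1 and 2 can be sketched in a few lines. This is only a simulation of the life cycle logic — the actual tasklet would be Java code running in DOTS against NotesDocuments, and the business rule and the "DEAD" rejection state here are illustrative assumptions:

```python
# Sketch of the life cycle only: the mail rule parks a message with
# RoutingState = "HOLD"; the tasklet applies business rules and clears
# the item so the router delivers. A message is modeled as a dict.
def apply_business_rules(message: dict) -> bool:
    """Illustrative rule: block anything marked confidential."""
    return "confidential" not in message.get("Subject", "").lower()

def process_held_message(message: dict) -> dict:
    if message.get("RoutingState") != "HOLD":
        return message                    # router already owns this one
    if apply_business_rules(message):
        del message["RoutingState"]       # router picks it up again
    else:
        message["RoutingState"] = "DEAD"  # illustrative rejection state
    return message

msg = {"Subject": "Quarterly report", "RoutingState": "HOLD"}
print(process_held_message(msg))
```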
As usual YMMV


Lotus Notes Reporting and Exports

QuickImage Category
While I think reports are a thing of the past and should be banned, the question "how to run reports on Notes" is quite popular. Today you would rather create a dashboard than a classical report, but since it is popular, here you go (in no specific order): As usual YMMV.


Documenting Notes Development

QuickImage Category
We all know: developers are to documentation like cats are to dogs. There are cute pictures of co-existence on the internet, but in reality one chases the other in a never-ending fight. One of the big reasons: nobody reads documentation until a problem occurs or there is a changing of the guard, neither of which falls into the "important" or "interesting" category of a regular developer. A partial solution to this dilemma is to automate documentation creation. I'm fully aware that a lot of generated documentation is not very useful, so be careful what you wish for.
What I describe in the following is not something readily available, but it would make an excellent plug-in for Domino Designer (if I can find some brothers in crime). Some of the tools are ready, some need to be created and all of them need to be glued together in an automated process. This is how it could look:
  1. The documentation process would be governed by an Apache ANT script. Such a script can be run from within Domino Designer and you could even run it automatically whenever you build your application
  2. Ideally the reports would run on individual files (makes things easier) and, for cross design element reports, on a summary file. When connecting an NSF to a version control system, the individual files are created on the file system, so that's sufficient. Alternatively you can use Guo Yi's Import/Export plug-in (which incidentally has an ANT interface) or call DXLMagic from the ANT script (it is like calling any command line Java program) with the DesignExtractorSINGLE mode. DXLMagic is useful for classic design elements, not for XPages, so stick to the export tool or version control for these
  3. Run XSLT based reports on the XML: View selection formula, XPages field validation, Buddy list reports, Hide-When formulas, Replication formulas, Field reports, View structure, XRef documents etc.
  4. Use JavaDoc to document all Java classes you might have used
  5. Use LSDoc to document your LotusScript (how to get all of it into a script library is subject to a future post)
  6. Use JSDoc to document all the JavaScript
  7. Use Visustin to generate flowcharts from all the language source code. With a little tweaking the *Doc generators can include the Visustin images (after all it is just HTML, which could be post-processed)
  8. Use Crap4J to check for the quality of the Java used (CRAP stands for: "Change Risk Analysis and Predictions")
  9. Use JSHint to check all the JavaScript (based on some opinions it will work better for our purpose than the original JSLint)
  10. The missing piece: LSLint. It would be way cool to have something like that. On the other hand: some code is better left unscrutinized
  11. Finally all results are uploaded into one wiki (or another), so the wiki takes care of "what has changed"
What would you add? What XML/XSLT driven reports could be created?


View selection reports

QuickImage Category
In the last entry I described a method to replace view selection formulas wholesale with boolean expressions. One question I got asked was: "Do we need this?". The clear answer is: "It depends". To make an informed decision you need to look at your view selection formulas. To make that easier I designed a report you can run against a DXL export of your database design (using DXLMagic) that will highlight stuff that needs fixing. Of course beauty is in the eye of the beholder. Eliminating @Now, @Today, @Tomorrow and @Yesterday is a no-brainer for anyone. Disputed items are @IsResponseDoc (you probably want @AllDescendants) or @Contains, as well as the total length or the number of *OR* conditions. Anyway, with a little automation the report can be part of your standard system documentation.
Sample for Selection Formula Report
The report is done in XSLT and you are free to add your own priorities in the matching formulas.
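The checks the XSLT report performs can be sketched outside of XSLT too. A minimal sketch of the matching logic in Python — the priority labels and advice strings are my own illustrative choices, not the report's actual wording:

```python
# Flag view selection formula constructs that hurt performance.
TIME_FUNCS = ("@Now", "@Today", "@Tomorrow", "@Yesterday")

def check_selection_formula(formula: str):
    """Return (priority, finding) pairs for one selection formula."""
    findings = []
    for func in TIME_FUNCS:
        if func in formula:
            findings.append(("high", func + " forces constant view updates"))
    if "@IsResponseDoc" in formula:
        findings.append(("medium", "@IsResponseDoc - consider @AllDescendants"))
    if "@Contains" in formula:
        findings.append(("medium", "@Contains - consider exact matches"))
    # crude OR counting; the threshold of 5 is an arbitrary example
    if formula.count("|") + formula.upper().count(" OR ") > 5:
        findings.append(("low", "many OR conditions - consider a computed flag"))
    return findings

print(check_selection_formula('SELECT Form = "Order" & OrderDate > @Today'))
```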


Brute Force View Tuning

QuickImage Category
Complex view selection formulas can slow down your Domino performance substantially, so keeping them simple is a mandate for high performance applications. One of my favorite approaches here is to have a set of hidden fields in my forms that compute to true or false and then simply use SELECT vsf_Demo as the selection formula, where "vsf" stands for "View Selection Formula". In large applications that results in rather nice performance. Downside of the approach: you need to recompute these values on each alteration of a document (but that's not too hard). You can even use this method to retrofit existing applications to improve performance. These are the steps:
  1. Create a FORM with one field for each view that contains the view selection formula
  2. Change the view selection formula (short of the SELECT @All ones) to point to that field
  3. Alter/Create the QuerySave event to: switch to that form, do a computeWithForm, switch back to the original form
  4. Create a script library with an easy function that takes a document and tells you if the document's selection formulas have been updated
  5. Add a call to that function to all agents
  6. Add an agent that runs on "documents created or changed" to catch any case where the two calls above didn't work (e.g. @Formula agents)
  7. Test, Test, Test
  8. Enjoy better performance
The code you need to add to the QuerySave event of any form is rather light:
Option Declare
Use "CoreHandlerLibrary"

Sub Querysave(Source As Notesuidocument, Continue As Variant)
    Dim handler As DocumentHandler
    Dim doc As NotesDocument
    Set doc = Source.Document
    Set handler = New DocumentHandler(doc)
     'We recompute here, since we will save anyway we don't care for the result
    Call handler.docHasChangedAfterUpdate()
End Sub
Being lazy and notoriously prone to typos I created some code that does step 1 and 2 for me in one go.
%REM
    Class ViewTuner
    Description: Creates a form with all view selection formulas and updates all views
%END REM
Public Class ViewTuner
    Private ourForm As NotesDOMDocumentNode
    Private formName As String
    Private fieldNamesUsed List As String
    Private resultStream As NotesStream
    Private parser As NotesDOMParser
    Private s As NotesSession
    Public Sub New
        formName = "ViewSelectionFormulas"
        Set me.s = New NotesSession
    End Sub
    %REM
        Property Set formToUse
        Description: Overwrite the formName if necessary
    %END REM

    Public Property Set formToUse As String
        me.formName = formToUse
    End Property

    %REM
        Function getEmptyForm
        Description: Get an empty form DOM in DXL
    %END REM

    Private Function getEmptyForm(formName As String)  As NotesDOMDocumentNode
        Dim rawform As NotesStream
        Dim s As New NotesSession
        Set rawForm = s.Createstream()
        Set resultStream = s.Createstream()

        Call rawform.Writetext("<?xml version='1.0' encoding='utf-8'?>")


Workflow and Organisational Directory

QuickImage Category
An interesting question was raised in a recent development discussion: "What is the best way to keep the organisation data of our workflow application(s)?" My first answer was easy: "Definitely not inside the application itself". But then the choices start (too many of them are not good for you): keep it in the Domino directory, in a custom Notes application, in an RDBMS, pull from HR, pull from LDAP? Each of these approaches has its merits and drawbacks. I'll elaborate.
Of course the first question to ask is: should I build my own workflow engine? There are a number of choices readily available. You can start with Ironworkflows on OpenNTF, which even provides a graphical workflow editor running in the browser (with full source code included). You could have a look at XFlow, which, with some help from a great library, will gain a graphical editor in due time. It also can work with different engines. If you are not into "complete source provided" type of applications, you can check out PAVONE Espresso Workflow, which runs both on Domino and WebSphere and has a nice graphical editor too. The graphical editors seem to be a must these days, but might only determine a fraction of the usefulness of a workflow engine (I know that statement is a candidate for flame-proof underwear).
But back to the initial question: where to store the organisation data? If you build your own engine I would strongly suggest, regardless of where you store it, to abstract the workflow logic from the storage. Define the API your application will call (like getWFDefinition(...), getApprovers(...), resolveRole(...) etc.). What calls would make sense in an engine warrants a post of its own. Then code against the API (this is called contract first programming). This way it doesn't matter if your storage backend changes and you can keep the considerations for storage separate from the considerations for your applications. In Domino such an API would best be implemented using the Extensibility API (that's what XFlow is doing). Now to the storage options:
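Contract first programming can be sketched in a few lines. The interface below uses the method names mentioned above, but the signatures, the in-memory backend and the role naming scheme are all assumptions for illustration — a real backend would query the Domino directory, an org database or LDAP:

```python
from abc import ABC, abstractmethod

class OrgDirectory(ABC):
    """The contract the workflow engine codes against."""

    @abstractmethod
    def get_approvers(self, user: str, process: str) -> list: ...

    @abstractmethod
    def resolve_role(self, role: str) -> list: ...

class InMemoryDirectory(OrgDirectory):
    """Trivial backend for testing; swap in a directory-backed one later."""

    def __init__(self, roles):
        self.roles = roles  # role name -> list of members

    def get_approvers(self, user, process):
        # illustrative convention: approvers live in a per-process role
        return self.resolve_role("Approver/" + process)

    def resolve_role(self, role):
        return self.roles.get(role, [])

backend = InMemoryDirectory({"Approver/Travel": ["CN=Jane Doe/O=Acme"]})
print(backend.get_approvers("CN=John Doe/O=Acme", "Travel"))
```

Because the engine only ever sees `OrgDirectory`, switching from the Notes application to LDAP or an HR feed means writing one new subclass, not touching the workflow logic.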
  • Domino directory (my preference)
    If it wasn't for THE ADMIN, the Domino directory would be an excellent choice. It has ample fields to capture organisational information like department, managers etc. You even have well defined extension points (yes: places where you legally can mess with the directory).
    Big advantage: easy to sync (we give you TDI), available for other applications via LDAP and when a user goes the roles go with him. That is the general idea of a directory service.
    Biggest challenge: the admins jealously guard the directory and won't let you get anywhere near it to customise or update data. Cited reasons: "If you mess with the directory, IBM won't support us" and "We can't grant the access rights without risking other parts of the data being messed up". There *is* a slight difference between altering the directory design (=fun with support) and updating the schema. Dealing with the schema (and LDAP) requires leaving the protected yellow bubble for a moment and learning about the finer details of LDAP (e.g. I didn't know that you can't use # for a group name, but & is allowed).
    Open to discussion: how well does the Domino Directory serve to capture all the different roles? I've seen very few engines using the directory alone. What I have seen is the Domino directory in use for lookups, but a custom front-end application for data entry, so the workflow owners can't mess with other data in the directory. Also popular: syncing the Domino directory via TDI with an external backend like the HR system
  • Custom Notes application
    That's the most prevalent form of corporate storage for organisation data. The original Prozessware used that approach, as do the tools mentioned above. The clear advantage is the flexibility of the solution, since you can tailor it to whatever you fancy without the risks perceived in adding to the Domino directory. Of course you lose the exposure via LDAP, which makes it a solution confined to Domino (unless you add some web services, which would be a good idea), and you need to keep the Domino directory in sync with the org database. What I've seen very often are org directories that synchronise with all sorts of backends: RDBMS, HR systems, web services etc. This of course opens the eternal question of sync vs. direct access
  • RDBMS
    As long as you look for a Domino solution there is little reason to entertain an RDBMS (unless of course your architect was hiding under a rock and thinks NoSQL isn't a database). So typically this approach can be found where e.g. HR systems run on different platforms. I'm not a big fan of direct RDBMS access since it ties you down to a rigid structure and, more importantly, adds another breaking point: if your RDBMS is down your workflow stops, so you need to queue your requests for processing. You can sync the data or at least use LEI/DECS to avoid language rojak
  • Pull from HR
    A good HR system would have a higher level API (probably a web service) that allows your workflow engine to pull org information. While more business-like than the RDBMS approach, these APIs are rarer. If your HR system has one - good for you, use it. Again you need to decide sync vs. direct access
  • LDAP
    If your LDAP server isn't plagued by a fragile engine it is a good place to store org data. LDAP can be queried from a lot of platforms and is language neutral. In Domino, LDAP goes well with XPages (I've seen an LDAP data source somewhere) and Java, but not so much with LotusScript. Of course if you use the Domino Directory as (one of) your LDAP engine(s) you get the best of both. A good idea when working with LDAP (Domino or others) is to have a sound LDAP front-end to view and enter data. Apache's Directory Studio is a good choice here
I'll cover more thoughts on what the org directory should provide in a later post. As usual YMMV.


Protecting sensitive information in Notes documents

QuickImage Category
Even in the most social businesses there is information that is available only on a "need to know" basis. Sometimes it is only a small subset of the fields in a document. In a Notes client, as long as I can see a document, I can see all the item values in it (using the document properties box), even if the field is hidden in the current form. So what are my options to protect a few sensitive fields?
  1. Encrypt the fields with an encryption key that only the authorised users have. Problem here: you need to manage these keys and need a different set for each use case - messy. Also you can't show any of these fields in a view
  2. Hide the database design when applying the template. Looks good at first glance, but a semi-skilled user can easily bypass that (e.g. copy & paste into an empty database, create a private view, use a @Prompt smart icon, use the free document viewer or NotesPeek) - security by obscurity never works
  3. Store the sensitive data in a second document that is protected by reader names that are more restrictive than those of the main document. This approach also keeps curious admins out (given you have security set), a capability an RDBMS is lacking
  4. Change the process and remove the need for the fine grained access control
Let's have a closer look at option 3. Notes allows you to pull in values from a different document using @GetDocField. So once we have the secret document, we can pull in these values. If a user can't see that document, no value is retrieved. The formula for such a computed-for-display field would look like this:
tmp := @GetDocField(RiskAssessmentID;"RiskAssessment");
@If(@IsError(tmp);"nothing to retrieve";tmp)
To create such a "secondary document" a few lines of LotusScript in a button make it a seamless experience:
Sub Click(Source As Button)
    Dim doc As NotesDocument
    Dim riskDoc As NotesDocument
    Dim uiDoc As NotesUIDocument
    Dim db As NotesDatabase
    Dim w As New NotesUIWorkspace
    Set uidoc = w.CurrentDocument
    Set doc = uidoc.Document
    If doc.IsNewNote Then
        Call doc.Save(True,True)
    End If
    Set db = doc.ParentDatabase
    If doc.RiskAssessmentID(0) = "" Then
        Set riskDoc = db.CreateDocument
        Call riskDoc.ReplaceItemValue("Form","RAF")
    Else
        Set riskDoc = db.GetDocumentByUNID(doc.RiskAssessmentID(0))
        If riskDoc Is Nothing Then
            Set riskDoc = db.CreateDocument
            Call riskDoc.ReplaceItemValue("Form","RAF")
        End If
    End If
    Call riskDoc.MakeResponse(doc)
    If  w.DialogBox("RAF",True,True,False,False,False,False,"Please provide your Risk Assessment",riskDoc,True) Then
        Call riskDoc.Save(True,True)
        Call doc.ReplaceItemValue("RiskAssessmentID",riskDoc.UniversalID)
        Call doc.Save(True,True)
    End If
    Call uidoc.Refresh
End Sub
To keep the 2 documents in sync we add some code into the PostSave event of the main form:
Sub Postsave(Source As Notesuidocument)
    Dim doc As NotesDocument
    Dim riskDoc As NotesDocument
    Dim db As NotesDatabase
    Set doc = source.Document
    If doc.RiskAssessmentID(0) = "" Then
        Exit Sub 'No secondary document to sync yet
    End If
    Set db = doc.ParentDatabase
    Set riskDoc = db.GetDocumentByUNID(doc.RiskAssessmentID(0))
    If riskDoc Is Nothing Then Exit Sub 'Secondary document not found
    Call riskDoc.ReplaceItemValue("Subject",doc.subject)
    'Repeat with other fields needed in views, for workflow and access control
    Call riskDoc.Save(True,True)
End Sub
We are almost done. The response documents need some handling when we try to open them.


Single category views and tagging

QuickImage Category
Tagging is en vogue and its proponents see it as an alternative to the classical folder approach. One alleged advantage of tagging is that you can have multiple tags, while a document can sit in only one folder. That's true for the file system and certain other mail applications, but neither for Lotus Notes nor GMail. In GMail there is actually no differentiation between folders and labels (as they call the tags). What is new - and I like that a lot - are better visualisations for folders/labels/tags: the infamous tag cloud and little labels in views and documents denoting how a document has been tagged.
In Domino showing documents with a certain tag is as easy as having a single category view with the (multi-value) tag field as the category. But not so fast. One of the really nice UI features of tagging is the ability to drill down: first show all documents that are tagged with one label, then with two etc. So our single category view needs to show all the combinations. I gave it a first stab. This is what I came up with (note: the field name for the tags is "Tags" and they are space separated):
  1. @If(Tags="";@Return("#no Tags");"");
  2. trueTags := @Sort(Tags);
  3. results := "";
  4. tagCount := @Elements(trueTags);
  5. @For(n := 1;n < tagCount;n := n + 1;
  6. workElement := trueTags[n];
  7. workTag := @Implode(@Subset(trueTags;n);" ");
  8. workList := @Subset(trueTags;n-tagCount);
  9. set1 := (workElement + " ") *+ workList;
  10. set2 := (workTag + " ") *+ workList;
  11. set3 := @Subset(trueTags;n) *+ (" "+@implode(workList;" "));
  12. results := results : set1 : set2 : set3
  13. );
  14. @Trim(@Unique(trueTags:results:@Implode(trueTags;" ")));
You could sprinkle in some @Lowercase if that makes sense for you.
  • In line 1 I weed out untagged documents. Depending on the business requirements you might want to weed them out in the view selection formula, so you can skip line 1.
  • In line 2 the tag names get sorted, since that looks much better further on.
  • The loop from line 5 to 13 builds different combinations of tags (I wonder if I got them all covered).
    • Line 6 gets the current tag in the loop
    • line 7 builds the current tag plus its ancestors as a single string
    • line 8 grabs the rest of the remaining elements
    • In line 9, 10 and 11 the two elements are then combined with each remaining element in the list (the *+)
    • and appended to the result in line 12
  • Line 14 adds the individual elements as well as the concatenation of all of them to the final result
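The combination logic above can be expressed outside of @Formulas too. The sketch below is not a literal translation of the loop, but captures its intent — emit every sorted tag combination as a category string so the single category view can drill down tag by tag (which also answers my "did I get them all covered" doubt, since itertools enumerates them exhaustively):

```python
from itertools import combinations

def tag_categories(tags):
    """All sorted tag combinations, space-joined, as category values."""
    unique = sorted(set(tags))       # mirrors @Sort plus de-duplication
    if not unique:
        return ["#no Tags"]          # same marker as line 1 of the formula
    result = []
    for size in range(1, len(unique) + 1):
        for combo in combinations(unique, size):
            result.append(" ".join(combo))
    return result

print(tag_categories(["xpages", "domino"]))
```

Note the count grows as 2^n - 1, which is why a view entry per combination gets expensive for heavily tagged documents.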
Check whether it works for you. Ideally the tag cloud would adjust too, only showing the possible tags for the existing selection - but that's subject to an XPages post about the tag cloud.
As usual YMMV (waiting for the howling about 40k limits)


Converting Notes Names into Internet Addresses

QuickImage Category
With IBM Connections en route to many Notes shops, an interesting problem arose: many users want to convert their private mailing lists into an IBM Connections community (making IBM sales happy, since you need to upgrade to full Connections for that).
I think that is a terrific idea - moving away from occasional mail blasts to rich online collaboration. Unfortunately IBM Connections allows the bulk upload of member information only using Internet addresses, not the Notes names that are in the private group documents. The solution is simple:
Go to File - Preferences... - Toolbar Customize and add a new button. In this button add this formula:
result := "";
realresult := "";
@For(n := 1;n <= @Elements(Members);n := n + 1;
look := @Trim(@NameLookup([Exhaustive];Members[n];"InternetAddress"));
realresult := @Trim(@Unique(realresult:look));
result := @Trim(@Unique(result : @If(look="";@Name([Abbreviate];Members[n]);look))));

FIELD comment := @Implode(result;@NewLine);

@Prompt([Ok]; "Members " + @Text(@Elements(Members)); "Found " + @Text(@Elements(realresult))+ " emails for "+@Text(@Elements(Members))+ " Notes Names")
Now open your personal address book (a.k.a. Contacts) and select a group in the group view. The button now looks up the Internet addresses in the Domino directory for you. It will update the comment field, so you might want to adjust that to your needs. Use the button in the view, not when the document is open. If an eMail address is not found (e.g. because the entry *is* an eMail address already), the name will be copied 1:1 into the new list.
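The lookup-with-fallback logic of the button formula can be sketched compactly. The dict here stands in for the @NameLookup directory call, so the names and data are illustrative only:

```python
def members_to_addresses(members, directory):
    """Resolve each member to an Internet address; keep the raw value
    when the lookup fails (it may already be an eMail address)."""
    result = []
    for member in members:
        address = directory.get(member, "")
        result.append(address if address else member)
    # de-duplicate while keeping order, like @Trim(@Unique(...))
    seen, unique = set(), []
    for entry in result:
        if entry and entry not in seen:
            seen.add(entry)
            unique.append(entry)
    return unique

directory = {"Jane Doe/Acme": "jane.doe@acme.com"}
print(members_to_addresses(["Jane Doe/Acme", "ext@partner.com"], directory))
```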
Update: When you want to save the list to a file to import it into Connections, you need to add 2 (two) empty lines on top. Otherwise it won't accept the import.
As usual YMMV


Filing eMail doesn't make you more productive - really?

QuickImage Category   
IBM research published an interesting study titled Am I wasting my time organizing email? A study of email refinding. It has been picked up in various places and looks like a field day for the "don't bother with organizing anything eMail" crowd. I think the paper is solid research for the question asked and the observations made.
However, I sense that a number of questions haven't been asked that might change the productivity bottom line in favor of a different approach. I fully agree that hitting search or simply scrolling through a list of messages (often sorted by subject or sender) is a very efficient way to find a specific message. Also having a message thread is invaluable. I used message threads already in R7 when they were only displayed when you opened a message. So what did the study not look at (to be clear: I'm not saying they should have looked at these, but I'm saying you need to look at these to gauge overall productivity):
  • Impact on productivity of a full inbox. How much time do I lose every time I look at the inbox to decide: is that a message I dealt with, or does this need my attention?
  • In Lotus Notes (other than Outlook) you can file a single message in as many folders as you like, so it is closer to tagging than to "classical" folders. Once users know that, would they use it?
    E.g. I have a folder 1Action2Day and then customer folders. I often move messages into both. Once I have dealt with a message I just remove it from 1Action2Day. Folders in Notes are equivalent to the tags in GMail; they could be a little more prominent in the mail UI (we do have the @Formula to show them)
  • In Notes I can at any time scroll and search in the "All Documents", so I don't need to settle for one or the other strategy. Depending on my needs different strategies work better.
  • Difference in search strategy depending on short or long term tasks. E.g. I file "Thank you" eMails in a specific folder, so I can pull them out at review time. They are elusive to search for since there are so many ways to say thank you (made my day, you're the hero, awesome job etc.). Also I use topic folders (e.g. a customer project) to review "what happened on the messaging wire". Using tags/folders I only need to make that association once, when I process the message, and not every time I search. So there's a huge difference between "looking for a specific message" and "getting an overview of what happened to {insert-filing-topic-here}"
  • Knowing that search is available, does "remove from folder" shorten filing time (I use that quite a bit)?
  • Impact of filing assistance tools. The paper mentioned one for Outlook but overlooked that we have delivered one for Notes for a very long time: SwiftFile (unfortunately Windows only)
  • Impact of an ad-hoc filing strategy. I don't really create labels/folders upfront but create them as I go. I also do not file everything. But I do remove everything from the inbox
The interesting pointers at the end of the paper can be translated: classifying messages is boring (true), automated tools can/should help (true), more people information might help too (a good topic for follow-on research). I would conclude: good metadata would be best (a.k.a. "What is the context of this message?"), if it weren't so inconvenient to create it.


Replace IE with XULRunner in Notes Client 8.5.3 on Windows

QuickImage Category
This is just in from the don't-try-this-at-home-since-it-is-unsupported, you-have-to-dig-really-deep-to-find-it department. The embedded browser in the Notes client uses an operating system dependent default engine: Internet Explorer on Windows and Firefox (or to be more precise: XULRunner) on Linux and Mac. On Windows that cuts you off from the progress made in Firefox. Luckily our favorite place to alter configuration settings for the Notes client, [notesprogramdir]/framework/rcp/plugin_customization.ini, got a set of new settings:

The first one switches the embedded browser to Mozilla, the second one does the same for the MIME rendering engine for eMails. I haven't seen a setting that would work on the display of a widget (the creation is done using XULRunner already).
Update: I stand corrected. The first setting is documented on a public URL in the Expeditor documentation, and the second one can be deduced from the interface spec. So it is just hidden in plain sight. The documentation suggests it should work in all R8.5 versions; someone could give it a try.
Update 2: My colleague Thomas Hampel remarked, that you can use Domino policies to push out these settings. Makes it very easy to handle, no fiddling with editors by end users required.


Large scale workflow application performance using Push Replication

QuickImage Category
Imagine the following scenario (live from a very large customer of mine):
A workflow application on a central server has at any time 500-800k active documents. A normal user would have access to about 50 of them, while an approver typically sees 1000-1500. Using a Notes client, Domino will be mainly busy *not* displaying records (even if you follow my advice). Contrast that with a local replica of the same application. Since documents a user can't see are not replicated, these local replicas would be tiny in comparison and offer a beautiful user experience. The only catch: if you work on a local replica you most likely will screw up notifications, and an approver will get a request before the document is in her local replica. The sequence that needs to be followed looks like this:
8 steps of local workflow
  1. User creates a new request in a local workflow database and submits it
  2. Local replica replicates back to the server
  3. Approver gets notified that new data is waiting
  4. New data is replicated from the server to the client
  5. The approver makes a decision and submits it
  6. Data is replicated back to the server
  7. Requester is notified on updated data
  8. Data is replicated from server back to the requester's workstation
It is easy to see why workflow databases hardly ever exist as local replicas. Replication as a background process typically runs on a schedule and doesn't tell you when it is finished (other than in the replicator page). There is no trigger to tell a local database: now it is time to fetch. But what if it were different? What if the requester would only do step 1 and steps 2-4 happened automagically? If the approver got the notification after the data has arrived in step 4? If the approver did only step 5 and steps 6-8 also happened automagically, with the notification after the local data has arrived?
This is exactly what Dragon Li from our Beijing lab and I are working on. The prototype runs quite beautifully but currently requires both users to be online. We are using machine-to-machine notification, so the automatic steps can be completed in the background without disturbing the users before they get notified. The hooks for notification persistence are ready and just need to be implemented. The beauty of this implementation: we use the time-tested replication, we just trigger it differently. No new protocol or emerging unratified standard is used. The application works through an innovative combination of what has been in the Notes client for quite a while already. Pending our internal process this will hit OpenNTF soon.


LiveText and HashTags

QuickImage Category
Without vendor backing, a UN sub-committee or never-concluding standards consortia, the standard for inline tagging has established itself. It is the humble hash (or pound) sign: #. There is a bit of wobble over whether tags should be lowercase or CamelCase, but the consensus on the # is universal. Using the Notes client's LiveText capabilities we can put hash tags in our applications or eMail messages to work.
Unfortunately the way to success is blocked by regular expressions, which are a member of the three occult mysteries of IT (Map-Reduce and XPath expressions being the other two). There are approximately 10 people who really understand them: the one who came up with them, and the other one is hiding. Luckily there is StackOverflow, the online Regex Tester and SourceForge's Regulator (courtesy of Roy Osherove).
Ideally the regular expression would give us access to the hash tag both with and without the leading #. After probing around and testing what I found on StackOverflow I settled on \#([A-Za-z0-9_-]+)(?![A-Za-z0-9_\]-]). One free Starbucks beverage of your choice if you can explain that one in plain English.
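For the curious (or the Starbucks hopefuls): the pattern can be probed outside the Notes client with any regex engine. Here is a minimal Python sketch (the sample text is made up) showing what groups 0 and 1 return - exactly the two content-type properties we configure below:

```python
import re

# The recognizer pattern from this post: group 0 is the whole match
# (including the leading #), group 1 is the bare tag without it.
HASHTAG = re.compile(r"\#([A-Za-z0-9_-]+)(?![A-Za-z0-9_\]-])")

def find_tags(text):
    """Return (full_match, bare_tag) pairs for every hash tag in text."""
    return [(m.group(0), m.group(1)) for m in HASHTAG.finditer(text)]

print(find_tags("Ping #LotusKnows and #ls2011 today"))
```

The negative lookahead is what keeps the match from ending in the middle of a longer run of tag characters.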
To put this to use you need to perform 3 steps (you can do them inside the wizard, but we split them out here for greater clarity):
  1. Configure a content type
    Right Click on the My Widgets Toolbox
    Right click on the "My Widgets" toolbox and select Configure Content Type. We will have 2 parts to the content: the full content including the # tag and the content without it.
    The HashTag Content Type
    I named the two properties content and tag but you are free to name the second one as you deem fit (don't change the first one). Eventually (if I can figure this out) I will add a 3rd property that returns the tag in all lower case.
  2. Configure a recognizer
    Configure the recognizer
    The regular expression is \#([A-Za-z0-9_-]+)(?![A-Za-z0-9_\]-]), which roughly translates into: "give me everything that starts with a # and has one or more letters/numbers/dashes in it, and return it also without the #". You assign group 0, which stands for "all of it", to content and group 1 to tag. Now we are ready for some action(s)
  3. Configure an action for your content (more about that below)
Repeat step 3 for every target you are interested in. Once you have completed all of this, LiveText will highlight your hash tags and offer you all configured actions:
Hash Tags with 3 configured actions
As an example we search for Tweets with that tag.


Query your Out-Of-Office status using a web service

In a more distributed working environment, presence information becomes more and more important. In the consumer space you "check in" using various services; in the enterprise world where is less important than how, where how stands for: available, busy, unavailable etc. In the IBM Lotus collaboration world we have multiple sources of status: the Sametime status, your IBM Connections status updates, the IBM LotusLive status and - for longer periods - your email out-of-office status. All but the last can easily be kept in sync thanks to WildFire. Integrating the OOO service proves a little more tricky since it isn't exposed in a neutral API. So in a first step I designed a web service that can retrieve one or more OOO status messages. In its first cut it ignores the OOO subject and the id of the requestor, but once you have the base version running it is easy to extend. A few caveats:
  • You can (and probably should) place that web service in its own database. Access control to that database defines who can access that service
  • You will need the OpenLog library in that database
  • The service should be signed with the server ID, so it can read all mail files
  • If you want to make it run across servers, you need to configure the servers to trust each other (check the Admin help for that)
  • Since web services don't cache anything it is not the most performant option, so there is improvement potential
  • There are 2 methods: one for just one status and one to retrieve a batch of them


Customise your forwarding message layout

One of the big temptations in Notes is to customise the mail template. Done right it can boost your productivity. One very interesting customisation I came across displays the job function together with the sender's and recipients' names - a neat idea, so I know whom to expect. However, that information was lost when a message was forwarded and the default from/to display appeared. A quick check with a custom form showed that it worked as designed. Debbie enlightened me that forwarding of Memo, Reply and ReplyToReply based documents triggers the use of a form called SimplifiedReply. Once you add your customisation there, everything works as expected. Another little customisation nugget: the pink separator line is created using a subform named $FORWARDSEP. So if you like a different color or a different set of information better, that's the place to look. Of course you promise to be careful when altering IBM provided templates.
Happy customising!


Mobile applications with Domino

The question crops up in a lot of customer discussions lately: "How can I bring my Notes and Domino applications to my mobile devices?" - with iPhone, Android and Blackberry cited most often, followed by "any phone" and, by a large margin, Windows Mobile (no version # specified) and others. There isn't a straight answer to that. Mobile applications actually represent another increase in complexity over web applications, which in turn represent an increase in complexity over client/server applications. The complexity stems from the decreasing control over the runtime environment. Let me elaborate:
  • in a client/server application you control all moving parts: the client code, the protocol on the network, the screen size, the server code. Clients can have their session state built in and you are free to distribute load evenly between client and server. You typically get away with a single language (e.g. Java, LotusScript or C#)
  • in a web application you relinquish control over your client runtime to a combination of browser vendors and IT department decisions on which to pick and what to enable/allow. You are no longer free in your choice of client language: it is a combination of 3 languages - HTML, CSS, JavaScript - and probably a bunch of frameworks like Dojo or jQuery. Picking a single browser platform is an illusion, since one of your execs will turn up with an iPad or iMac (and out goes your IE corporate standard)
  • In mobile applications you further relinquish the control over screen sizes, user interaction modes (is there a keyboard? Does it multi-task and how? Is there a stylus? What about a camera?) and network access: it might be online and offline. I discussed device diversity before.
There is a huge debate going on whether an application should be native or use web technologies. As always in IT: it depends. Every approach has its trade-offs. There are three general options: web applications, native applications and hybrid applications. IBM doesn't offer a Lotus Notes runtime on mobile devices (and never had one) and Lotus Expeditor runs on none of the current mobile operating systems, so storing and synchronising data is most likely an issue if you want to take your applications offline (make them work without network connectivity). So this is what you are in for:
  • Web applications
    These are comparatively easy to build, since they are a variation of the web applications you are already familiar with, using OpenNTF's mobile controls. With a little foresight when creating your custom controls you can develop large and small screen versions with moderate overhead. But even with classic Domino, when applied ZEN style, mobile applications are easy to build. The obvious drawback is the dependency on an available (and sufficiently fast) network connection and the lack of access to device features (like camera, GPS or gyroscope). Also it is your responsibility to make the application look like your target platform. The OpenNTF mobile controls address some of these shortcomings, which technically makes them hybrid applications
  • Native applications
    Interestingly mobile devices, driven by a multitude of factors (subject of a different post), reversed the trend of moving all applications to the browser. The slogan "There's an app for that" went down very well. The obvious advantages of native applications are: you can stuff them into the respective app store, they can run without a network connection, they feel like "the real thing" and they have access to all possible device features. The only disadvantage from a user perspective: they need to be installed once (tapping into the buying lust when browsing the app store overcomes that easily). Interestingly, for mobile devices updating applications has been reasonably solved, while classic computers need heavy hitting tools like M$ SMS or IBM BigFix. The list of disadvantages for developers reads a little longer:
    • Every device has different properties you need to take into account: speed, screen size, responsiveness and hardware configuration (camera, gyroscope, GPS - just to name a few)
    • The APIs and modes of notification, data synchronisation and multi-tasking differ in their philosophy. The latest claims that this has been solved still await real-life proof. Synchronisation is stubbornly hard (you might want to check CouchDB to learn about the differences). A nice approach also is to use MQ and a signalling approach to keep all data in sync. For Blackberry and Android, sync can be handled by Teamstudio Unplugged (with an iOS version allegedly under development), which has the distinct advantage that you can develop in Domino Designer using XPages.
    • Languages. There is no single language that is available for all devices
      • ObjectiveC: when you develop for iOS (iPhone/iPad/iPod) that is what you need to use. Not available on non-Apple devices
      • Java: Android uses it natively and you can use it for Symbian (in case it is still relevant), Blackberry and older Windows Mobile versions. Not available on Apple or Windows Phone 7
      • C/C++: available on Symbian, Android (via NDK), Blackberry and smaller players. Not available for Windows Mobile 7 or Apple.
      • C#/VB.NET: Windows Phone 7 only
      • AppInventor: Graphical development for Android only
      • JavaScript: Native to HP/Palm's webOS, for all others only via Hybrid applications
      So native applications will require substantial work. This might be justified when you target a high number of devices
  • Hybrid applications
    They are built using web tooling: JavaScript, HTML and CSS, but have a small device dependent core that enables access to the device API and uses HTML5's offline storage capabilities. OpenNTF's mobile controls use the PhoneGap framework to run on iOS, Android, Blackberry, Windows Mobile, Palm, Symbian and soon MeeGo and Bada. Still, supported features differ per platform. PhoneGap also offers a cloud build service. There are more frameworks to evaluate
  • [Update] Mono and AIR
    Andreas Rosen from QKom adds that additional client-side platforms are Adobe AIR, MonoTouch (iPhone, iPad, iPod) and Mono for Android. With the recent turmoil around Mono I wouldn't bet the house on it. AIR runs on Apple, Android and the Blackberry PlayBook. I didn't find any mention of webOS (Palm), Blackberry phones or Windows Phone 7 (yet). Andreas' company offers SoapGateQ! to make data available for mobile devices, which works well with AIR and any platform capable of calling a web service.
In conclusion: the only available Domino specific solution for native applications is Teamstudio Unplugged, which takes care of data sync. For other native solutions you need to roll your own sync. For web or hybrid applications OpenNTF's mobile controls are the way to go (again leaving you to take care of sync). So doable, but not pretty.
Keep in mind: mobile applications are NOT shrunken desktop applications. To make them successful you need to strip them down to the bare essentials. Also a finger is a much cruder pointing device than a mouse and lends itself rather to drag and drop than to click (tap).
As usual YMMV


Testing Notes and Domino applications

Chuck Norris doesn't test. For the rest of us: you need a plan when preparing to test Notes applications. Traditionally Notes application testing has been an afterthought. After all, adding a few fields onto a form and creating a few columns in a view was all you needed for an entry level Notes application. Like the frog that gets boiled slowly, we often don't recognise the point when applications have grown so big that a more rigorous test regime starts to make sense. While TDD is all the rage for code driven development, it hasn't taken a strong hold in the Domino world, which is very visually driven.
A lot of times the first consideration about a systemic test only arises when you want to upgrade a server: Will all my applications run? Do I need to change anything?
In general Notes and Domino are fiercely backward compatible, so the odds are with you. However the odd change (like variable names you used that later became class names) and subtle alterations in behaviour will require that you at least have a look.
Now testing an existing application that has been built anyhow is a completely different animal from building an application suitable for automated testing. It is like building a house with proper heating and plumbing vs. retrofitting them into an existing building - just ask your average castle owner. So let us have a look at the two use cases:
  • Testing an existing application

    This is the trickier use case since your application most likely deeply connects your user interface with your data back-end. So a few steps are necessary to perform tests.
    • Know your code: using Teamstudio Analyzer, DXLMagic, Ytria ScanEZ or OpenNTF's Source Sniffer you can get a feel for the code base: where are your agents, what @Db* functions are at work, where are external system calls, how complex is your code and where have developers committed a crime by omitting Option Declare. Teamstudio's Upgrade filters provide a handy way to get the pay-attention-to-this list when planning to test for upgrades
    • Prepare a base line version of your application. Base line doesn't mean empty, but fully configured with sample data (you want to test "this value gets edited and changed" too). ZIP away the base line, so you can repeat the test
    • Prepare an environment that reflects your use cases. The two main fallacies in testing are using a small database and a fast network (and then acting surprised when users complain about the performance of the large database on the slow network). So you might end up having a base line with a lot of data preconfigured. Using Apache's TCPMon you can simulate a slow network connection even in your exclusive single user gigabit test network. (There are more)
    • Write a test plan. Depending who (or what) is executing it, it can be short sentences or elaborate instructions. Make sure to include edge cases where values are close to a limit, exactly hit it or just exceed it a little.
      A Notes database is a good place to keep a test plan (I don't mean attaching a text document to a Notes database, but having a proper form with all test data). Test plans ensure that you don't miss something. Test plans are living documents. Your test plan WILL be incomplete, so be prepared to update it frequently (this is why every test deserves its own Notes document (with versioning)). When you look at a test plan you will find striking similarities to Use Cases. They are actually natural extensions of them. While the Use Case describes what the user (or system) does, the test plan describes in detail how it is done
    • Pick a test tool that can replay testing sequences. Here it gets a little tricky. IBM doesn't offer a tool for Notes client testing. There is server.load, but that's mostly suitable for mail performance testing only. Luckily the Yellowsphere stepped in and SmartTouchan's Autouser fills the gap. It allows for both functional and performance testing. When you test Domino web applications your selection is wider. You need to distinguish between functional and performance testing:
      • Performance:

        Here you can test the raw Domino performance by requesting and sending HTTP packages from/to the Domino server, or the overall performance including your specific browser's JavaScript performance. Typically you go for the first case. Here you can start with JMeter or Rational Performance Tester (there are others, but my pay cheque doesn't come from there)
      • Functionality:

        Top of my list is Rational Functional Tester (same reason as above), but I also like Selenium, which nicely runs in your Firefox. There are almost infinite combinations of browsers and operating systems, so to no surprise you can find a number of offerings that do the cross browser testing chore for you. Some of them can run against your intranet application, so you don't need to open up your applications to the wild west.
      There is no magic bullet application. Testing is time consuming and comes with a learning curve (guess what a lot of interns do)
    • Run the tests in various environments (if you test for upgrades) or before and after code changes (if you test for performance or regression avoidance)
    • Be very clear: the cost for test coverage grows exponentially and you can't afford 100%. A good measurement is to multiply the likelihood of an error with the economic damage it can do. If you spend more money on testing than that, you are wasting it. (Of course that is a slippery slope if an application error can lead to physical harm; this is where I see the limit of "assess economic damage only")
    • Your test plan should start with the most critical and most used functions and then move to the less used and less critical actions. Repeat until you run out of time or budget.
  • Building applications for testability

    Your overall test cycle gets shorter and more efficient when you design an application for testability from the very beginning. The approach is called "Test Driven Development" (TDD). In a nutshell: you write your tests before you write code (which will make them fail), then you write code until the tests pass. This works well for, well, code. In a highly interactive visual environment like Domino Designer that is much harder. Very often business logic is hidden (pun intended) in hide-when formulas. You (partially) need to let go of such ideas. Some pointers:
    • Put all business logic into script libraries. You can write test methods that call these functions more easily
    • Have a mechanism that generates your test data, so you can start over with a test series anytime
    • Use the XPages unit tests on OpenNTF
    • Abstract your logic from its implementation. So instead of writing @DbLookup(@DbName,"configLookup","departments",2) you write systemConfig.getDepartments() or at least systemConfig.getConfig("departments") and implement the actual data access in a library.
    • There is much more to say.... somewhen
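The abstraction bullet above is language-neutral. Here is a hedged Python sketch of the same idea (the class and the canned data are invented for illustration; in a real Notes application the injected lookup would wrap @DbLookup or a NotesView):

```python
class SystemConfig:
    """Facade that hides where configuration values really come from.

    Production code injects a callable wrapping the real data access;
    a unit test injects a plain dictionary lookup instead."""

    def __init__(self, lookup):
        # lookup: callable taking a config key, returning a list of values
        self._lookup = lookup

    def get_config(self, key):
        return self._lookup(key)

    def get_departments(self):
        # Business-level name; callers never see the lookup mechanics
        return self.get_config("departments")


# In a test, a canned dictionary stands in for the Notes view lookup
fake_store = {"departments": ["Sales", "Finance", "IT"]}
config = SystemConfig(lambda key: fake_store.get(key, []))
print(config.get_departments())
```

Because the data access is injected, the business logic can be exercised without a running Domino server - which is the whole point of designing for testability.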
As usual YMMV


Messaging Routing Puzzle - sharpen your pencils

How good is your grasp of routing structures and Domino configuration settings? Put it to the test. ACME corporation finally wants to replace their aging legacy eMail system with a shiny new Domino 8.5. But like every larger organisation they are very careful and opt for a prolonged co-existence between the legacy server and the new collaboration platform. So only one location shall migrate, and depending on their success the others will follow. Here are the constraints:
  • Acme hates SPAM and viruses. So ALL messages from and to internet eMail need to be routed through servers/services provided by MessageLabs
  • Only one Internet domain shall be used.
  • The servers can see each other on a VPN connection
  • Users (at least on Domino) should see ALL users in the address book
Getting routing right shouldn't be too difficult, should it?
Now they wonder about:
  • Should both servers have public IP addresses? The MX obviously points to the MessageLabs servers
  • What connection documents with what settings do they need?
  • What Domain documents with what settings do they need?
  • Is a Smarthost configuration needed?
  • How does an entry of a legacy user in the Domino directory need to look like?
  • How can they eliminate the risk of circular routings?
  • Which server should mails get delivered to when coming from MessageLabs?
  • Which server (or both) should send out messages to MessageLabs?
  • How should messages from the Internet to Domino get routed?
  • How should messages from the Internet to the legacy mail get routed?
  • How should messages from the legacy mail to Domino get routed?
  • How should messages from Domino to the legacy mail get routed?
  • Should MessageLabs have access to the user list (so messages to unknown users can be rejected at lab level)?
  • What else can you recommend to watch out for? Obviously keeping coexistence short, but politics might prolong it
Can you help them?
Update: We are only interested in mail routing. No calendar, groups or migration. Plain routing only. The larger group of people are in the "S location" that also has better bandwidth.


Comparing document changes

From time to time I have the need to compare the documents of two databases for changes:
  • Before I clean up a Domino configuration I create a backup copy (File - Application - New Copy). Once I'm done I generate the report of changes made - reliably, without missing a field
  • We suspect data was altered by a rookie user. Using a backup copy all the changes made need to be highlighted
  • Suspecting that a replication went awry I want to compare two instances of one database on different servers
For these use cases I wrote a class that allows me to exclude certain fields, or documents based on certain forms. The code is a rough cut and you might need to adjust it to your specific needs. Anyway, enjoy:
Public Class CompareEngine
    ' Source (s...) and target (t...) database, addressed by server and file path
    Sub New(sServer As String, sDB As String, tServer As String, tDB As String)
    ' When True the report lists changed fields only, otherwise all fields
    Public Property Set reportOnlyChangedFields As Boolean
    ' When True the finished report is sent out by eMail
    Public Property Set eMailResults As Boolean
    ' Documents based on these forms are skipped entirely
    Public Sub addExcludedForm(formName As String)
    ' These fields are ignored when comparing documents
    Public Sub addExcludedField(fieldName As String)
    ' Escapes field values for use in the HTML report
    Private Function fixTextforHTML(orgText As String) As String
    ' Runs the comparison and generates the report
    Public Function report
    ' Lists documents that exist in the target only
    Sub reportNewDocuments(targetDB As NotesDatabase)
    ' Compares a target document against its source counterpart
    Sub compareTargetDoc(tDoc As NotesDocument)
    ' Compares a source document against the target database
    Sub compareSourceDoc(sDoc As NotesDocument, tDB As NotesDatabase)
    ' Renders a single document into the report with a status text
    Function documentSingleDoc(doc As NotesDocument, txtStatus As String)
    ' Field-by-field comparison of two matching documents
    Public Sub compareTwoDocuments(sDoc As NotesDocument, tDoc As NotesDocument)

End Class

Function ReplaceSubstring(sourcestr As String, fromstr As String, tostr As String) As String
Download the Full Source code. As usual - YMMV
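The heart of the class, compareTwoDocuments, boils down to a field-by-field diff that honours the exclusion list. Not the original LotusScript (download that above), but a small Python sketch of the same logic, with documents represented as plain field dictionaries and invented sample values:

```python
def compare_documents(source, target, excluded_fields=()):
    """Return {field: (source_value, target_value)} for every field
    that differs between the two documents, skipping excluded fields."""
    changes = {}
    # Look at the union of field names so added/removed fields show up too
    for field in set(source) | set(target):
        if field in excluded_fields:
            continue
        s_val, t_val = source.get(field), target.get(field)
        if s_val != t_val:
            changes[field] = (s_val, t_val)
    return changes


# Sample data: a system field like $Revisions is excluded from the report
before = {"Status": "Draft", "Amount": 100, "$Revisions": "a"}
after = {"Status": "Approved", "Amount": 100, "$Revisions": "b"}
print(compare_documents(before, after, excluded_fields=("$Revisions",)))
```

Excluding noisy system fields is what keeps the report readable - without it every replication-related item would show up as a "change".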


How fragmented are your Domino Server Drives on Windows - Help needed!

Disk fragmentation has been discussed here more than once, and defragmentation is part of the ideal Notes upgrade and the Upgrade Cheat Sheet. In a comment, Nico57 points out that Domino's disk allocation scheme is a huge contributor, especially when compact -c is part of your database hygiene routines.
The irony of compact: the inner structures of your NSF will be squeaky clean, but the content will (after a while) be splattered over your disk like pimples on a teenager's face. The least you can do is to point all temp locations onto a different partition. So I had a chat with the engineering team and they are interested to learn about the magnitude of the problem. And for that I need your help. We would like to know how fragmented the NSFs on your Windows servers really are. This is what I want you to do:
  1. Download a copy of Sysinternals Contig.exe from Microsoft Technet and make it accessible from your Windows server (Copy it there, put it on a network drive you can reach, you know what you are doing)
  2. Open a command prompt on your server and type [Wherever-contig-is]\contig -a -s [Path-to-Domino-data-directory]\*.nsf > fragmentation.txt. If you have database or directory links you have to repeat the process, but then append with >> fragmentation.txt. This will list the files and their fragmentation. Don't forget the -a parameter, which tells contig to only analyze and not to actually defrag
  3. Find the database with the most fragments as well as the average number of fragments (that number is at the very end of a run). I would like to know these figures
  4. Head over to the Entry form and let me know. There are only 2 numeric values mandatory (and the selection of two dropdowns), the rest is strictly optional. Leave an eMail address if it is OK to contact you with a question or two.
  5. Leave a comment here (optional) if you have a question or would like to share special procedures or experiences.
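To save yourself the manual hunt in step 3, the report file can be parsed with a few lines of script. A Python sketch - assuming contig's per-file lines read "<path> is in <n> fragments", which you should verify against your own output before trusting the numbers:

```python
import re

# Assumption: contig -a reports one line per analysed file in the form
# "<path> is in <n> fragments"; verify this against your fragmentation.txt.
FRAG_LINE = re.compile(r"^(?P<path>.+?) is in (?P<frags>\d+) fragments?", re.M)

def fragmentation_stats(report_text):
    """Return (worst file, its fragment count, average fragments per file)."""
    frags = {m.group("path"): int(m.group("frags"))
             for m in FRAG_LINE.finditer(report_text)}
    if not frags:
        return None
    worst = max(frags, key=frags.get)
    average = sum(frags.values()) / len(frags)
    return worst, frags[worst], round(average, 2)

sample = """C:\\data\\names.nsf is in 57 fragments
C:\\data\\log.nsf is in 3 fragments"""
print(fragmentation_stats(sample))
```

The two numbers it prints (worst fragment count and average) are exactly the two mandatory values in the entry form.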
Your contribution is really appreciated!


Tuning out of eMail conversations

I have been practising inbox zero for a while now. Doing that I realised a few very bad eMail habits getting in my way: mixing up To: and CC:, and Reply All. So what's wrong:
  • To and CC have very clear intentions. Recipients are in the To field because the sender expects a reaction: a reply or an action taken. CC (originally Carbon Copy, now more used as "Just in Case Copy") addresses recipients who should know about the content, but no reply or action is expected (still, one is free to reply or act). I frequently see these principles violated. Senders come back to me asking "Why didn't you reply?" while I was only in CC, or I find my name in the To field and there is nothing to be done by me. I'm very tempted to start ignoring mis-labelled messages
  • CC lists get longer and longer. Once you have been added to an email conversation it is almost impossible to get off it. Reply to All is too pervasive (and I'm partly guilty there too). The proponents argue that once taken off such a conversation it is very hard to get back on, and information gets lost to the individuals (that's why IBM Connections Activities makes so much more sense - also available in a LotusLive Activities flavour)
In Activities there is a possibility to "tune out" of an activity. You are still a member but it doesn't show up on your dashboard. Even if you are tuned out, new action items will show up in your ToDo list, so information doesn't get lost. I'm very tempted to add such a feature to my Notes client. A view, a form, some script would do:
Tuning out of eMail
How many actionable items would I miss then?


Who has access to your certifier?

Domains and certifiers are the most commonly confused items in Notes and Domino. By habit (I wouldn't call it a best practice, it really is just a habit) they bear the same name, but they don't have to. You can have multiple totally distinct certifiers in one Domain and you can use one certifier across multiple Domains - or mix both. Interestingly the same is true for Internet certificates (mostly seen in https): you surf to the domain but the certificate has been created by Verisign. Now if you were to borrow steal get access to the root certificate containing the private key of Verisign's certificates, you would be up against some Fort Knox style security. Rightfully so. With access to the root certificate you can compromise the whole chain of security. The same is true for the certifier that is used to create servers, users and organisational units for use in your Domino Domain(s). When reviewing how secure these are kept, I regularly find ignorance in holy union with incompetence. Quite often hired hands have boxes with floppies (less so these days) or memory sticks with all IDs and certifiers on them and an open list of passwords. Contractors never change, contractors never have their own motives, contractors are always honest, never underpaid (the actual guy/girl, not the outsourcing fees) and of utmost integrity. Also they never make mistakes. And then you wake up. The whole pyramid of digital certificates, non-repudiation and authenticity crumbles when your PKI is insufficiently protected. A security expert once stated it very clearly: "If you can not confirm with 100% confidence who ever had access to your root certifier (and might have kept a copy) since its creation, your PKI must be considered compromised". I would say that is probably true for a lot of Notes shops. Happy recertification then!
So what does a proper and due process for certifier management look like? There are several steps involved. Announce every step out loud and make sure all participants are briefed about the process and understand it:
  1. Set up a new machine with only an internal network connection to your admin server and a working Notes Admin client. Make sure it is malware free. Have a printer attached to it and install a hex viewer
  2. Set up, or verify the proper setup of, the Domino Certificate Authority (yes, it has been there since R6)
  3. Hook up a projector, so all participants can see what is going on
  4. Invite four to six signers (board members is a good choice) and some witnesses. Ideally have the corporate notary present
  5. Prepare envelopes and sheets of paper suitable for long term safe storage (ask your notary for the right material)
  6. Create a new certifier ID in the admin client. Let one of the signers type the password, that is NOT shared with anybody
  7. Edit the certifier and configure it to hold four to six passwords (depending on the number of signers you invited), require at least 3 passwords to unlock it
  8. Let every signer add a password that should be really complex -- and long. Let them write it down first before they type it in
  9. Hand one envelope with a password to each witness and let them verify that the password actually works. Make sure they don't see any other password. If that fails, go back to step 6
  10. Now connect to the network and upload the new certifier to the Domino CA you configured/verified in step 2
  11. Let the envelopes be sealed and handed to the corporate notary to be stored in the designated corporate vault
  12. Open the ID in the hex viewer and print the hex dump. Put that in an envelope together with pluggable storage (SD card, USB stick etc.) that contains a copy of the ID. If that media isn't readable anymore you can restore the ID from the printed hex dump
  13. Let everybody sign a document that outlines what happened just now and hand that over to the notary for filing
  14. Last but not least: wipe the cert file from the machine (not just delete - properly wipe it) and also wipe the temp files (the printer spooler stuff)
Now you have a proper cert and it is time to create your ID Vault and your organisational certifiers. The organisational certifiers warrant the same attention and I would recommend using them everywhere. What I'm curious about: "How do you secure your PKI?"


Sharing HideWhen Formulas

Yes - a Notes classic development post, since XPages knows a "rendered" property (labeled as "visible") but no "hideWhen". I was asked "How can I share HideWhen formulas?". The short answer is: "You can't". The long answer: "Once you separate the label from the logic it becomes possible". Let me elaborate:
Instead of adding business logic to your hide-when formula in the formula window itself, you just enter keywords that describe the condition, something like isManager or isConfidential (important: these are FIELD names, so no quotes). I like using "is" as a prefix since the keyword describes a state of the document, and based on that state information is supposed to be shown or hidden. De facto you create your own domain specific language here. Now you create one computed for display (CfD) numeric field with exactly that name, so you have fields named isManager or isConfidential. Those fields need to have results that are either @True or @False. To get to that result you can use anything at your disposal including profile documents or <gasp>@DbLookup or @DbColumn</gasp>. You could allow simple AND or OR statements like isDraft | !isManager. Having done that you can start thinking about reuse.
Pack all the fields into a subform and put it at the end of your form. In the computation sequence fields are calculated before hide-when is executed, so putting them last ensures that you have the right values in them and don't need 2 recomputations. Quite often you will not use all hide conditions in all forms, so you might perform lookups that are discarded without being used. To avoid that you can use the following strategy: add a CfD field, type text - multivalue, before the subform and name it HideWhenUsedHere. In its formula (which would differ from form to form) you list the hide-when types you want to use, e.g. "isManager":"isDraft" (note: you have quotes here!). On a subform you would have a CfD field HideWhen_[Name of the subform]. When you add a subform you would adjust the HideWhenUsedHere formula to look like @Trim(@Unique("isManager":"isDraft":HideWhen_[Name of the subform])). Then add as the first line of each CfD field of your hide-when subform: @If(@IsNotMember(@ThisName;HideWhenUsedHere);@Return(@False);""). This way an eventual longer running lookup won't execute; @IsMember executes very fast. Of course you could construct it the other way around and have a field DoNotUseTheseHideWhenHere; then the first line would look like this: @If(@IsMember(@ThisName;DoNotUseTheseHideWhenHere);@Return(@False);"")
It is not as elegant or powerful as an SSJS library in XPages, but once you wrap your head around it, it provides an easy, reusable way to express your "business visibility rules"
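For readers more at home outside of Formula language, the guard pattern translates directly: named rules get evaluated only when a form declares them as used, so the expensive ones are skipped everywhere else. A rough Python analogy (the rule registry, function names and sample data are my own invention; this is not how Notes executes formulas):

```python
# Named visibility rules, evaluated only when a "form" declares
# that it uses them - mirroring the @IsNotMember guard above.
def is_manager(doc):
    # Stand-in for a potentially expensive @DbLookup
    return "Manager" in doc.get("Roles", [])

RULES = {"isManager": is_manager}

def hide_states(doc, used_rules):
    # Rules not listed in used_rules are never called at all
    return {name: RULES[name](doc) for name in used_rules}

print(hide_states({"Roles": ["Manager"]}, ["isManager"]))
```

The point of the guard is the same in both worlds: the lookup cost is only paid where the rule is actually consumed.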
As usual: YMMV


What's your Data Definition Language?

After recent insights into data structures I was wondering what the right format is to describe data models. It's a thought almost alien to Notes developers, since we "just add what we need". Nevertheless, having a data model eases maintenance and documentation (which brings up the question of what comes first: the application or the data model). Clarity about data models also fosters the contract-first way of developing applications, where agreements about interfaces and data structures are made up-front, before implementation. There are a number of contestants available to choose from (with no claim to completeness):
  • SQL Data Definition Language. Basically that's all the CREATE TABLE statements you use to create your RDBMS tables and views. Advantages of SQL DDL are its closeness to RDBMS, which makes implementing the described data easy, the ability to create the definition in a simple text editor but also in rich visual tools (like ERwin, which was one of the first of its kind), and the capability of other DDLs (like UML) to read/write SQL DDL. The biggest drawback in a world where SQL no longer rules alone is that same closeness to RDBMS and its lack of support for transmittable data (think web services, sync or a split MVC pattern). Today I would say SQL DDL is hardly the source of your data model anymore, but rather an output from one of the other DDLs (most likely UML)
  • UML and its XML representation. The Unified Modeling Language is designed to do much more than describe data. Besides data one can model other structures (like components, deployments or packages), behaviours and interactions. There is a huge offering of UML tools available on the market: Rational System Architect, Visual Paradigm, DIA (or Visio), Violet, UMLet, Altova UModel (Windows only) and many more. UML can output almost any other format, including export to XML Schema. There is also a lot of literature available on the topic.
    While UML certainly covers all aspects of modelling, it doesn't come without drawbacks. Working with UML requires a dedicated tool or plug-in, so it is hard to get stuff done quickly (graphics can get in the way of execution speed). UML offers two ways to model data: Entity Relationship Diagrams and Object Diagrams. Neither of them really fits the document working style we find in web services or web 2.0 applications, so some of the generated XML Schemas look a bit "hammered into shape". Furthermore I haven't seen a lot of tooling that would verify data for conformance with a UML model at runtime.
  • Eclipse EMF models. EMF has been designed mainly as a modelling framework to generate code (and thus other data definitions) out of its models. EMF data itself is stored as XML, and there is an impressive list of documentation available online and in print. Nathan is a big EMF fan. One would most likely use Eclipse to work with EMF; I haven't explored its suitability for text editing. There are also a lot of transformations available from and to EMF, which are pending my attention.
  • XML Schema (including RELAX NG and DTD). When you are comfortable reading and writing XML, using XML Schema comes naturally. While you can perfectly well write it in a plain text editor, you most likely will use at least an XML-aware editor like oXygen XML, Stylus Studio, XMLSpy (which all run inside Eclipse and are Schema-aware), Eclipse's base XML editor or any other of the countless offerings. If you deal with SOA you will realise that WSDL, the contract language of SOA, uses XML Schema in its bowels. XML Schema can also be used to validate documents on the fly, without the need to first generate code out of it. I like the capability to define my own data types and to mix and match existing Schemas to fit my specific needs. Custom data types would be classes/objects in UML and are (to my knowledge) absent in SQL DDL. Since XML Schemas live at a (usually public) URL and a lot of existing Schemas are available already (covering diverse topics like music or eBusiness (UBL)), duplication of effort can be reduced. XML Schema can be transformed to SQL (I would of course rather suggest using pureXML, but that's a story for another day)
  • I didn't find any tool to model/verify JSON data, which is not surprising, since JSON is by definition schema-free. However I expect that some generic JSON schema will appear over time; declaration-driven validation is simply too valuable.
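Until such a standard emerges, a few lines of code already buy you declaration-driven validation. A minimal sketch in Python (the tiny "schema" format here - a mapping of field name to expected type - is my own invention for illustration, not any standard):

```python
# Minimal declaration-driven JSON validation sketch.
import json

def validate(doc, schema):
    """Return a list of violations of the declared fields and types."""
    errors = []
    for field, expected in schema.items():
        if field not in doc:
            errors.append("missing field: %s" % field)
        elif not isinstance(doc[field], expected):
            errors.append("wrong type for %s" % field)
    return errors

# An invented declaration: field name -> expected Python type
fruit_schema = {"Subject": str, "Color": str, "Taste": str}

doc = json.loads('{"Subject": "Durian", "Color": "White"}')
print(validate(doc, fruit_schema))  # the Taste field is missing
```

The value is the same as with XML Schema: the rules live in a declaration you can inspect and reuse, not scattered through imperative code.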
Looking at Notes and Domino, the XML nature of XPages, ATOM and RDF, Activity Streams and the IBM Social Business Toolkit, I made XML Schema my first choice; however, I'm ready to be convinced otherwise.
What is your Data Definition Language?


Software Rollout Worst Practices

Rolling out client software can be fun or a nightmare. One of my early IT experiences was to roll out software on new computers (IBM AT, to be precise) in universities in the state of Hesse. There was no network, no central software management and no IT manager with an (ill-conceived) plan. As a student I got to visit all these universities on a corporate dime, which was a cool intern project. And it was a green-field deployment, so we had 100% control of the environment. Back then I learned a great deal about batch/shell programming and how to succeed at or screw up a large-scale roll-out. (They hired me back, so my success/screw-up ratio was acceptable)
Interestingly, a lot of the lessons learned there hold true today. So before sharing my best practices for roll-outs, let me introduce the worst (in no specific order):
  • Don't have a plan! Planning is for wimps. You could get an idea of what really is involved and gain clarity about what is needed. Avoid at any cost software that helps in both brainstorming and project planning, like iMindMap (the "prettiest" one, nice organic lines), Mindplan (a native Notes application) or MindManager (my former favourite, before I switched to Linux)
  • Stick to the original plan! You ignored the first advice and created a plan, now at least stick to it and never adjust or revise it. Worked for the Soviet Union, will work for you
  • Don't ask, don't tell! Communicating update plans is sooooo uncool. Your business users love the surprise of having their workstations blocked and being confronted with unfamiliar UIs and moved functions. Don't negotiate a time frame, they will never let you do that roll-out
  • Trust your vendor blindly! Your vendor probably has exactly the same environment as you, so the install/update routines will work without you checking them; don't waste time trying to understand the process. In the same category: you won't find any side effects, since you don't test whether the new software runs with existing files / data
  • Shoot blindly! There are two important steps: first, don't gather intelligence about your targets. Information about CPU, RAM, disk size, available disk space, existing software and configurations only makes you uneasy and dents your confidence in the roll-out. If you have to collect this information, at least let the users do that and send it in an eMail or individual text or spreadsheet documents, so you can avoid analysing them because it is "too much work". Secondly, make sure your roll-out is one-size-fits-all. Base the roll-out on assumptions rather than facts. Bare minimum assumptions need to be: all machines have enough space, the same version of software is installed and the locations for software and data are fixed
  • Don't customise! The default installer/updater will do just fine, any adjustments can be done by the users or helpdesk later on
  • Don't automate! Best is to have just a little printed cheat sheet with steps to repeat. If you would automate you would remove our beloved "failure by typo" (OK, the very best: only have an oral history of steps). Also don't use software to push things out, your kids and their friends need that intern job after all. Sneaker network is king. Never ever look at BigFix
  • Don't backup! It only takes time, storage space and shows your lack of adventure readiness
  • Presume the workstations are alright! Messed-up configurations, highly fragmented disks, space constraints or screwed-up registries are things that only happen in other companies. And previous patches are for sure properly installed and all software is the latest version. Did I mention: never ever look at BigFix (now IBM Tivoli Endpoint Manager)
  • Don't have a plan B! Don't plan for rollback, restoring of computers or re-imaging of workstations. Users simply need to understand the perils of IT
  • Broad daylight is best! Why waste nights or weekends rolling out software? Do it during office hours. Plan creatively. Sales and accounting love a month-end or quarter-end roll-out. Engineering prefers its roll-out shortly before a project deadline. It removes the burden of doing their job when it is busiest
  • Data migration? What data migration? Do not account for the fact that files or databases need to be converted. That is the users' responsibility. They need to schedule that out of their own time
  • Big bang is best! Never ever try to do a pilot. Waste of time. Also make sure all installation happens at the same time. After all you need to see how your network and servers behave under extreme load. Big bang removes the need to test transition and co-existence scenarios.
  • Most important: Never ever train your users! You want IT to be the talk of the town in your organisation. All those angry faces are just misguided signs of affection. And your helpdesk wants something to do after all - make sure they are caught by surprise too
I've seen all of the above happening, what did you see?


eMail archival in PDF and electronic record keeping

The question pops up quite regularly: "Our compliance department has decided to use PDF/A for long-term record storage, how can I save my eMail to it?" (The question applies to ALL eMail systems). The short answer: not as easy as you think. The biggest obstacle is legal need vs. user expectation. To make that clear: I'm not a lawyer, this is not legal advice, just my opinion; talk to your legal counsel before taking action. User expectation (and thus problem awareness): "Storing as PDF is like storing on paper, so what's the big deal?" In reality electronic record keeping has a few different requirements (and NO, printing an eMail as seen on screen is NOT record keeping - more on this in a second). Every jurisdiction has its own regulations, but they are strikingly similar (for the usual devil in the details ask your lawyer), so I just take Singapore's Electronic Transactions Act as a sample:
Retention of electronic records
9. —(1)   Where a rule of law requires any document, record or information to be retained, or provides for certain consequences if it is not, that requirement is satisfied by retaining the document, record or information in the form of an electronic record if the following conditions are satisfied:
(a) the information contained therein remains accessible so as to be usable for subsequent reference;
(b) the electronic record is retained in the format in which it was originally generated, sent or received, or in a format which can be demonstrated to represent accurately the information originally generated, sent or received;
(c) such information, if any, as enables the identification of the origin and destination of an electronic record and the date and time when it was sent or received, is retained; and
(d) any additional requirements relating to the retention of such electronic records specified by the public agency which has supervision over the requirement for the retention of such records are complied with.
(colour emphasis mine)
So there is "more than meets the eye". An eMail record is only completely kept if you keep the header information. Now you have two possibilities: change the way you "print" to PDF to include all header / hidden fields (probably at the end of the message), or use PDF capabilities to retain them accessibly as PDF properties. The latter case is more interesting, since it resembles the user experience in your mail client: users don't see the "techie stuff", but it is a click away if they want to have a peek. There are a number of ways to create the PDF:
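Whichever way you create the PDF, the headers the act asks for - origin, destination, date and time - can be extracted from the raw message up front. A sketch using Python's standard library email parser (the sample message is invented):

```python
# Extract the headers an electronic record must retain from a raw
# RFC 822 message: origin, destination, date/time.
from email.parser import Parser

# A made-up sample message; real input would be the raw MIME source
raw = """From: sender@example.com
To: recipient@example.com
Date: Tue, 01 Mar 2011 09:00:00 +0800
Subject: Quarterly report

Body of the message.
"""

msg = Parser().parsestr(raw)
# These values could be stored as PDF document properties, keeping
# them accessible without cluttering the rendered page.
retained = {h: msg[h] for h in ("From", "To", "Date")}
print(retained)
```

How those values then land in the PDF (appended to the page or stored as document properties) is up to the PDF library you choose.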


More fun with curl - Web forms and access rights (you will not like this)

We had fun with curl and LotusLive before. Now let's have some fun with curl and web forms. To follow along you should download and install cURL (for Ubuntu users: sudo apt-get install curl). When discussing web enablement and XPages, the topic of "field validation" pops up quite regularly. My advice here stands firm: "client-side validation solely serves the comfort of the users (which is important) but never the integrity of your data". More often than not I'm greeted with disbelief. However, at the very end what is sent back to the server is just an HTTP POST, and the server developer has little control over how that POST was generated. In a modern browser one can use Firebug or GreaseMonkey to alter the upload. All browsers can be redirected to send data via a local proxy (try TCPMon for fun, or any other local proxy). While that might be out of reach for NMU*, a simple cURL command is not. I will use Domino as the backend for my examples; however, the approach applies to all types of backend: ASP, PHP, JSP, JSF, Ruby, NodeJS and whatever. Some of the considerations are Domino-specific (around back-end behaviour).
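The core claim - that the server only ever sees a crafted byte stream - can be illustrated without curl as well. A sketch in Python (the target URL and field names follow the Fruit example in this post; the server itself is hypothetical, so the request is built but not sent):

```python
# Any client can assemble the exact byte stream a browser form would
# send - client-side validation never touches this path.
from urllib.parse import urlencode
from urllib.request import Request

fields = {
    "Subject": "Durian",
    "Color": "White",
    "__Click": "0",  # the hidden field Domino adds to its own forms
}
body = urlencode(fields).encode("ascii")

# A request object ready to send to the sample database
req = Request(
    "http://localhost/curldemo.nsf/fruit?CreateDocument",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # the same payload curl --data would transmit
```

Nothing in this code ever consulted the form's JavaScript - which is exactly why server-side validation is non-negotiable.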
To send (HTTP POST) data to a server one can simply use curl -F 'fieldname=fieldvalue' http://targeturl. However, that command line can get messy when there are a lot of fields and possibly authentication and redirections. So I will a) use a file as the source for the upload and b) use the following command: curl --netrc --data @filename --post301 --post302 http://targeturl. The parameters --post301 and --post302 make sure curl follows redirections, if they occur, without dropping the POST method; --data @ uses a file name as the source for the upload; finally --netrc tells curl, if authentication is required, to look in a secure (mode 600) file .netrc ( _netrc on Windows) for a line: machine hostname login username password password (replace the bold parts with your information). You can wrap this into a shell script to reduce typing. I created a simple form Fruit with 3 fields: Subject, Color, Taste and one view Fruits sorted by subject in my curldemo.nsf. Let the fun begin:
    • Test: Post Subject=Durian&Color=White&Taste=don't ask!!! to http://localhost/curldemo.nsf/fruit?CreateDocument
      In this case an authorized user posts exactly the fields that are in the HTML form to the designated address.
    • Result/Insight: Works as expected. You get a message back (with some html decoration) Form processed.
    • Test: Post Subject=Apple&Color=Green&Taste=sour to "http://localhost/curldemo.nsf/fruit?OpenForm&Seq=1"
      One could expect the same result as in the first test, since it is all fields, a valid url and an authorized user.
    • Result/Insight: Domino comes back with an empty form and no document gets created. On closer inspection you will find an extra hidden field "__Click" in the form Domino provides. So when you change your post data to Subject=Apple&Color=Green&Taste=sour&__Click=0 the document gets created as expected.
    • Test: Remove your user's right to create documents by demoting to author with "Create documents" unchecked. Post Subject=Apple&Color=Green&Taste=sour to http://localhost/curldemo.nsf/fruit?CreateDocument
    • Result/Insight: As expected you get an HTTP Error 401 (not authorized) message
    • Test: Promote your user to Editor and post Subject=Apple2&Color=Green And Green&Taste=deadly&__Click=0 to http://localhost/curldemo.nsf/Fruits/Apple2?EditDocument&Seq=1
    • Result/Insight: The document gets saved back to the server and you get a "Form processed" message back. Works as expected. If you use a fruit name that doesn't exist you get a 404 - Entry not found in index error. You can change the key field when you post Subject=Deadly Apple&Color=Green And Green&Taste=deadly&__Click=0
    • Test: Update the input validation formula of the Color field to @If(@Trim(@ThisValue)="";@Failure("Please submit a color");@Success). Post Subject=JackFruit to http://localhost/curldemo.nsf/Fruit?CreateDocument. In this example we only post a subset of the fields to create a document, failing the server-side validation
    • Result/Insight: As expected the creation of the document fails and an error is returned.
    • Test: Post Color=Red And Green&__Click=0 to http://localhost/curldemo.nsf/Fruits/Deadly%20Apple?EditDocument&Seq=1. In this example we only post a subset of the fields back to the form
    • Result/Insight: The existing fields in the document are preserved and only the submitted values are updated. Works as expected.

      So far we were playing nice and got expected results. Now let's get sneaky.


Apache Axis and Sharepoint Webservices

In preparation for Lotusphere 2011 I had to deal with SharePoint web services. While SharePoint 2010 is transitioning to OData, the bulk of the APIs is still web services only, as in SharePoint 2003 and 2007. Inspired by Stubby and Julian's excellent explanations I decided to use Apache Axis (it's a bit outdated and one would probably use Apache CXF for flexibility today, but for my use case Axis was sufficient and lean). Now SharePoint has a lot of web service end points (I always thought you have a few and distinguish by port and service, but who am I to challenge a Microsoft architecture) you might want to use in code. Axis comes with a nice little tool called WSDL2Java to generate all the Java classes you need. Unfortunately the ServiceLocator classes have the URL the WSDL file was retrieved from hard-coded as a local variable, used in case you call the service locator without a URL.
Now I don't like the idea of having an arbitrary URL in a Java file. So I considered two options: edit all generated classes and replace the static string with a call to a variable or a not-implemented error -or- find a way to generate all these classes afresh for every instance where one wants to use them. Then the actual/current SharePoint server name would be in the variable. I decided for the latter and wrote a little script. It's a Linux script and requires that you have downloaded Axis and made it available on the Java classpath (easiest: copy the jars to the lib/ext directory of your JVM). I also used some Linux eye candy, so you might need to adjust that for Mac or Windows. The result is a JAR file you can use in your XPages, agent or Java project(s). Here you go:


Adventures in new-z-land: Configuring LVM on zLinux

This week I'm honing my skills on Domino for zLinux. z/VM allows you to run a nearly unlimited number of guests with various operating systems. For Domino, IBM supports SUSE 64-bit Linux and Red Hat 64-bit Linux. The Domino server is a full 64-bit implementation on zLinux. The mainframe also supports all sorts of storage. Typically one would find large disk farms with iSCSI connectors, but the "traditional storage" would be DASD (direct attached storage device). z/VM makes DASD devices visible as 1, 2, 4, 8, 24 and 54 GB storage devices. So instead of going after a large chunk of the attached iSCSI I decided to get a set of smallish DASD devices and test the Logical Volume Manager on Linux to combine them into a larger entity. This is relevant for standard Linux servers too (with bigger disks to start with) once you run out of storage on your RAID 10 array and want to add more. While Domino offers directory links, LVM is more flexible, since it appears as a single storage location to the application. LVM is a standard feature of Linux and is available on all major distributions (you might need to install the packages first). In my test I used Red Hat Enterprise Linux 5.2 on System z, where LVM is pre-installed (check this Redbook for details). LVM pools physical disks, RAID arrays or SAN storage into "volume groups". Inside such a group multiple partitions can be created that are then formatted with a file system. Volume groups can be extended just by adding disks, and can thereafter carry additional partitions or allow the existing partitions to be enlarged to cater for more data
Linux Logical Volume Manager
Depending on the file system, a partition might need to be unmounted before it can be extended, so we typically don't see the root ( / ) or boot partition inside a volume group. There are a number of steps involved in configuring the LVM, which are almost identical on all Linux versions except for dealing with raw storage (which would be DASD in my case). I configured all steps using the command line. If that is not your cup of tea, Red Hat provides the command system-config-lvm, which provides a GUI (which I used for the screen shots). To get the GUI working you need to log in to zLinux with ssh -X to enable X Window support (Windows users: check which application supports X Windows). As a first step we want to check


Administration rules #2, #4, #21

Sriram asked for the rationale behind rules #2, #4 and #21 of the Golden Rules for Domino Administrators. So here we go. These admin rules are designed to protect you from trouble. Breaking them requires that you know what you are doing, which is quite different from thinking you know what you are doing. So unless you have three good reasons (one is not enough, also two is not sufficient, three is the number) stick to them.
  2. Never ever use operating system tools to manipulate (copy, delete, rename or create) Domino databases. This includes using FTP to move databases to other locations!
    When you touch databases with operating system tools you expose yourself to unnecessary risk. You might be logged in as a different user than the Domino task and break file attributes (owner/permissions). When you copy an NSF you actually create a replica - and 2 replicas on one server are forbidden (it definitely screws up DAOS). You also might end up copying a file out of reach. Database deletion/rename is an adminp task, so links can get adjusted. When you FTP an NSF you copy the ODS including any problems it may contain (besides breaking the file access rights). With replication only documents get transmitted - if that doesn't work you have a problem you need to fix anyway. Furthermore you risk attempting operations while the Domino server is running and corrupting the databases in the process. So all risk, no rewards
  4. Always use the server console for unscheduled replications. Never use the client replicator page for that
    The replicator page runs from the user workstation with the user's rights. You will route the replication traffic through the client network. You will screw up replication formulas (if there are any) and you won't discover a connection issue between the 2 servers. You also block your own replicator. All disadvantages, no rewards
  21. Never use backup copies of ID files when users forget their password. Use ID Vault and password recovery
    The fact that an admin has access to a backup copy of an ID file, including a password, invalidates the WHOLE security model, since an admin can decide to impersonate anyone. It is sloppy security, not worthy of the name. Also, old copies might have old keys. Handling ID files is staff-intensive. So: all work and risk, no rewards
    Update: As Manfred pointed out in the comments: this includes the risk of permanent data loss when document encryption keys are in that ID.
Of course that's just a very compressed summary. Keep in mind: We must all face the choice between what is right and what is easy
As usual: YMMV


Golden Rules for Domino Administrators (courtesy of Manfred Meise)

Manfred Meise of MMI Consult has compiled a list of Golden Rules / Administrative Pledges for Notes/Domino administrators. To save English readers the trouble of Google-translated English, I've translated (and amended) them for you:

The trouble-free operation of a Notes/Domino infrastructure requires attention to (and observance of) a few basic rules. Or to put it differently: ignoring or violating these rules is asking for trouble. The list doesn't claim to be complete and will (based on sad experience in projects) be extended in future.

As a responsible Lotus Domino Administrator I pledge the following:

  1. Never ever allow two replicas of a database to be stored on the same server
  2. Never ever use operating system tools to manipulate (copy, delete, rename or create) Domino databases. This includes using FTP to move databases to other locations!
  3. Never restore a Notes database directly to the server where it came from (unless the original is gone) - See also #1
  4. Always use the server console for unscheduled replications. Never use the client replicator page for that
  5. When using the transaction log always use a physically separate disk with sufficient space
  6. When using "archiving" transaction logging, ensure that the backup software does support this
  7. To create databases from templates always use File - Application - New... Never copy or rename files at the OS level
  8. When creating DIR links always point them to destinations outside the Domino data directory. Never point two dir links to the same destination
  9. Neither install a Notes client on a Domino server, nor run the client part of the Domino server
  10. Set "Use operating system time" (including DST) on all clients and servers
  11. Ensure a proper backup. Replication (even cluster replication) is no replacement for backup
  12. Keep the servers running especially at night, so the house keeping functions can run
  13. Remove replicas from servers that have been offline for extended periods of time (e.g. 6 weeks) and fetch new replicas
  14. Replicate all system databases at least once a day (that includes: ensure the system databases are replicas)
  15. Never grant manager access to servers from other domains
  16. Always ensure system templates are of the highest release of the Domino servers in use. Templates are backward, not forward, compatible
  17. When using OS level virus scanners exclude at least the data directory. In a multi-user (client) install exclude all data directories of all users, at least *.nsf, *.ntf, *.box
  18. Every Domino server will be protected by a Domino aware virus scanner that can check the database content including the attachments
  19. Only run the server tasks that are in use and ensure proper configuration (keep in mind: some tasks like statistics and adminp are not "used" by users but still relevant)
  20. Never ever delete a person document when a user leaves. Always use the adminp process to ensure complete removal and addition to a deny-access group
  21. Never use backup copies of ID files when users forget their password. Use ID Vault and password recovery
  22. Ensure that users' workstations have the current set of drivers (first and foremost video) installed
  23. Fight disk fragmentation by regularly defragmenting Notes and Domino program and data drives
Thank you Manfred!


Notes / Domino upgrade cheat sheet

Over time I've written a number of entries related to Domino upgrades, Domino administration and helped many customers to succeed in implementing the latest versions. This entry summarizes what works and provides links to the relevant articles and information serving as a convenient entry point. It is linked in the sidebar and I will keep it current.
  1. Make sure you know what the ideal Domino upgrade looks like
  2. You need a plan, a project plan. Head over to MindPlan and get your version of a mindmapping cum project planning Notes composite application
  3. First make yourself knowledgeable by browsing the IBM Fixlist. It is always a good idea to know what is coming
  4. If you plan to jump versions, you should be on the latest patch level of your current version. IBM Fix Central provides the latest patches to get you there
  5. Visit Upgrade Central to get the latest recommendation
  6. Make sure you build a high performance Domino server for best user experience
  7. If that server is a new one, you can minimise your downtime when upgrading
  8. Revisit your Replication and Routing architecture. It might be just fine, it might need some TLC
  9. If your Domino server runs on Windows you need to take care of disk fragmentation. Use OpenNTF's DominoDefrag or the commercially supported DefragNSF. If you run Notes clients on Windows, fragmentation is an issue there too. DefragNSF has a client module to take care of that
  10. When rolling out your client upgrades you need to rethink your installation location and installation type (I strongly advocate a shared install), especially when you want to be Windows 7 compliant. Smart Upgrade can't change location or type, so you want to invest in the Panagenda Marvel Client or the BCC Client Genie for upgrade, roaming and management
  11. Disk fragmentation is an issue on your clients too. The minimum you can do about it is to incorporate a defragmentation run while installing, using the free MyDiskDefrag. Better still, look for continuous defragmentation. Both DefragNSF and the Panagenda Marvel Client support continuous client defrag. You can also consider using MyDefragGUI, which provides a screen saver that defrags whenever your machine is idle (Added March 09, 2011)
  12. Revisit your eMail retention policies. Once you understand the difference between Backup and Archival, you might consider an eMail life cycle solution like the iQ.Suite
  13. If you would like to optimise your infrastructure and benchmark it against thousands of other installations, give TrustFactory a call and schedule a Health Check
  14. To improve the quality of web mail (both your use of webmail and the Internet eMail going in and out) consider Geniisoft's iFidelity plug-in for your Domino server
  15. While cleaning out your installation you might want to bring some order to your group names. It saves a lot of administrative time. To switch your user administration into auto-pilot mode consider HADSL FiRM, the Federated identity and Resource Management tool
  16. Give your users the opportunity to share ideas and deploy IdeaJam by Elguji
  17. Last but not least: make your users more productive! Adopt GTD (after you read the book) and deploy the eProductivity template
And never forget why you are on Domino and what happens when others try to move you away (even Accenture struggles - and the Internet never forgets).
Just to be clear: this post mentions several commercial offerings from IBM Business Partners. You need to evaluate for yourself how an investment into these offerings makes sense for you.
As usual YMMV


Loading HTML or XML Content in LotusScript over HTTP

Your application needs data that is stored on a web server. If that data is available through a web service you are lucky: since R8, web service clients are supported in LotusScript. If you want to load data straight from a URL you are out of luck. Typically you would resort to ActiveX and use the IE component to do the retrieval, which introduces 3 evils: a Windows dependency, an IE dependency and an ActiveX dependency. The other way is to use Java, which turns a lot of LotusScript developers off. The solution is a ready-made library that wraps all the Java you need into a convenient LotusScript class. The use case I had was to read HTML from a remote site and return a specific table for further processing. So my class has an XPath parameter that allows slicing out some part of the returned HTML. This is how you would use it in LotusScript:
    ' Agent UpdateHTMLOnChange
    ' Created May 28, 2010 by Stephan H Wissel
    ' Description: Reads all documents that have been flagged
    ' as changed and retrieves the updated HTML

Option Public
Option Declare

Use "HTTPUpdatesLS"

Sub Initialize
    Dim updateClass As HTTPUpdates
    Set updateClass = New HTTPUpdates
    Call updateClass.UpdatePendingDocuments()
    Set updateClass = nothing
End Sub
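The HTTPUpdates library itself isn't shown here, but the XPath-slicing idea it wraps can be sketched in plain Java. Note: the class name, sample markup and table id below are invented for illustration, this is not the code behind HTTPUpdatesLS:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathSlice {

    // Extract the text content of the first node matching an XPath
    // expression from a well-formed XML/XHTML string.
    static String slice(String xml, String expr) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            XPath xp = XPathFactory.newInstance().newXPath();
            Node n = (Node) xp.evaluate(expr, doc, XPathConstants.NODE);
            return n == null ? null : n.getTextContent();
        } catch (Exception e) {
            // A sketch: real code would report parse/XPath errors properly
            return null;
        }
    }

    public static void main(String[] args) {
        String page = "<html><body><table id='data'><tr><td>42</td></tr></table></body></html>";
        // Slice out the table cell the same way the XPath parameter would
        System.out.println(slice(page, "//table[@id='data']//td")); // prints 42
    }
}
```

In the real library the input comes from an HTTP GET against the remote site, and real-world HTML usually needs to be tidied into well-formed XML first before an XPath expression can slice out the table.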


Update all views in a database - web style

QuickImage Category
It is a bad idea to check the property "hide from Notes client" on a view. If you do this you burden your HTTP task with refreshing that view rather than leaving that to the indexer task (which would run in its own thread on its own processor(s)). The better way to hide views from Notes clients (for convenience) is to start their name with "(". The performance impact of fixing them can be quite positive (and bad if you don't). Until you get the approval and find the time to fix the view visibility, a method is needed to quickly update all database views from a web perspective. In my toolbox there is a little script that does exactly that. The nice aspect of it: it lives in a utility database and takes the database to work on as a parameter:
http://mywebserver/senseitoolbox.nsf/webupdateviews?openAgent&database=apps/hr/webleave.nsf
The agent renders a page with a link to each view in the database (I don't check for the "hide from web" property) and an Ajax call that opens every view once through the URL and records completion. Completion doesn't imply success: I'm not monitoring for return codes. The agent is a good interim helper until the view visibility to the indexing task has been restored.
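The URL-building part of such a warm-up agent can be sketched in Java. Server, database path and view names below are invented examples, not the actual toolbox code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ViewWarmup {

    // Build one ?OpenView URL per view; opening each URL forces the
    // HTTP task to refresh the view index.
    static List<String> viewUrls(String server, String dbPath, List<String> views) {
        List<String> urls = new ArrayList<>();
        for (String v : views) {
            // Spaces and slashes in view names must be URL-encoded
            String encoded = v.replace(" ", "%20").replace("/", "%2F");
            urls.add("http://" + server + "/" + dbPath + "/" + encoded + "?OpenView");
        }
        return urls;
    }

    public static void main(String[] args) {
        for (String u : viewUrls("mywebserver", "apps/hr/webleave.nsf",
                Arrays.asList("All Documents", "By Status"))) {
            System.out.println(u);
        }
        // A real agent would now issue an HTTP GET for each URL
        // (e.g. via java.net.HttpURLConnection) and record completion.
    }
}
```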
As usual YMMV


Shortcut for launching Lotus Notes

QuickImage Category  
I'm a fulltime Ubuntu user. And I love keyboards (probably due to the fact that my first computer was an IBM S/36). So I use shortcuts a lot. Recently I installed Ubuntu Tweak. Besides a lot of other kewl stuff it allows me to define arbitrary commands and assign them to keyboard shortcuts. So I assigned Ctrl+Alt+n to Lotus Notes, so it is just one key press away (use this command line: /opt/ibm/lotus/notes/framework/../notes). On the same note (pun intended): I'm using Cardapio for my menu. It has (like Spotlight in OS X or Windows 7) a Super+Space shortcut that opens the menu and focuses on the search box. As you type, results from the menu, a local search and a web search are presented to choose from. It even lists software you can install from the Ubuntu Software Centre.


Protect your Domino applications from Firesheep

QuickImage Category  
The appearance of Firesheep and the resulting awareness is a good thing. The threat posed by "sidejacking" of cookie based authentication has been around for quite a while (not as long as other Fire sheep): just use a packet sniffer like Wireshark or any other sniffing, penetration and security tool.
Safeguarding your applications requires securing the transmission lines. There are three general ways (note: this distinction isn't technically accurate, but it clarifies the options): server/application provided, network provided and user selected.
  1. Network provided security can be a VPN or encrypted access points (which still leave options to interfere at the end-points)
  2. User selected are conscious or automated choices to insist on encryption (ZDNet has more details)
  3. Server/application provided is the ability and insistence to encrypt the whole session, not just the authentication process
In Domino this is quite easy:
  1. You need to acquire an SSL certificate, either by buying one or creating your own
  2. Next you install and activate the certificate on the Domino server. Catch here: you need distinct IP addresses if you have more than one domain to secure. An HTTP 1.1 Host header isn't good enough.
  3. Now you need to consider: do you want to secure all databases for all connections or only databases where you expect users to login? If you decide on a database-by-database approach you can check the database properties and require SSL for a connection (that's a good time to disable HTTP access for databases you don't want to be accessed from the web UI)
    Database property for SSL access
  4. If you decide that any authenticated connection must use HTTPS all the time, you can configure the HTTP server to do so. In your server document you should have switched to "Load Internet configurations from Server\Internet Sites documents" long ago. If not, now is the time.
    Configure to load config from Internet sites
    In the internet site document you can decide to reroute all traffic to HTTPS or just the authenticated access
    Security settings in Internet site document
  5. Restart your HTTP task: tell http restart
As usual YMMV


Create an Enterprise Event Calendar on the Cheap

QuickImage Category
With Notes 8.5 comes the ability to overlay your own calendar with other calendars. An obvious use case for this functionality (besides your kids' school calendar) is the list of corporate events. To get a corporate event calendar follow these easy steps:
  1. Create a new database on your server. Call it Corporateevents.nsf and base it on the standard mail template
  2. Give normal users "No access" with "Read Public Documents" access in the ACL
    Give read access to Public Documents
    You can add this setting in the preferences too, same effect.
  3. Create a mail-in document in your Domino directory pointing to that database
    Mail-In Documents are in the Directory
    Sample Mail-in document
  4. Open the database and edit the profile (More - Preferences)
    1. Set the mailbox owner to the mail in name you just created (Corporate Events in our example)
      Change the Mailfile Owner
    2. Edit the Autoprocessing settings in Calendar & ToDo to automatically accept all incoming invitations
      Calendar Autoprocessing Options
    3. Check the access settings to verify our ACL change above worked (or make the changes here)
      Mail access settings
    Save the preferences and you are good to go
How to use this:
Anybody who wants to manage an event would simply create a calendar entry (meeting invite) in his/her own calendar and invite "Corporate Events" to the "meeting". In the meeting body all event information like agenda, directions (take the staircase, all 27 steps, turn left at the water cooler) and other useful information can be listed. Almost always corporate events undergo changes when moving through their planning stages from idea to proposal to planned to confirmed. The event owner just needs to update the calendar entry in the personal mail file to keep the corporate events listing up to date. And we all know nothing is closer to you than anything in your mail file (except for Luis of course).
As usual: YMMV


Access control in Domino - Part 2: Inside your NSF

QuickImage Category
So you made it into the NSF. What you can do (or not) depends to a great extent on what you see. What you can see depends on your access level and your membership in any existing Readers and Authors items (we commonly refer to items as fields, but that's not exact: fields are the placeholders in forms that enable users to enter items, either explicitly by typing them in or implicitly by executing their formulas. Items are what gets stored in documents).
There is one item and two item types that govern the visibility of a document for all users and servers. The item has already been introduced in Part 1: $PublicAccess. If this item exists and is set to "1" (Text!!) a document is considered a public document. If the item is missing or "0" the document is considered a standard document.
The two item types are Authors and Readers items (mostly added by means of Readers and Authors fields). There can be more than one of each per document, but there is a limit per item type of 32k for all entries of all items of that type. The document properties (see below) also allow you to specify readers, but that's just some UI. If anything other than "Readers and above" is specified there, an item with the name $Readers gets created and stored in the document.
Read protection gets activated on a document-by-document basis once one of the items of type Readers in the document has a non-empty value. The presence of Readers items alone doesn't activate read protection. Domino uses inclusive access rights. So once you activate read protection you have to list (explicitly or implicitly) all readers of a document. Domino doesn't have a concept of "all except ..."
Document properties showing read access
A common fallacy is to add the server name or server group name to a document's Reader item (with the help of a computed Readers field) "to ensure access". This activates read protection even if it might not be necessary (read protection costs CPU power, so you don't want it when you don't need it). A better approach is to either add them to an Authors item or to have only one computed Readers field that has the server group or role in its formula and pulls in other names using Names items (from user-entered Names fields). I often included an Authors item with the value [Joshua] without actually having this role in the ACL (anybody get the reference?). Authors items do not activate read protection but entitle a member of an Authors item to see a document (you can only edit what you see). So we end up with the following read access table:
Read access to documents with and without Reader items
To determine membership in the permitted Authors or Readers items, Domino evaluates your name, your group memberships and your roles (for local replicas you need to switch on "Enforce consistent ACL" for that to work).
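The decision logic of that table can be sketched as a simplified model. This is my own illustration of the Readers/Authors interplay described above; it deliberately ignores access levels and public documents:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReadAccess {

    // A document is read-protected only when at least one Readers value
    // is non-empty. Authors entries grant visibility to their members
    // without activating read protection themselves.
    static boolean canRead(Set<String> readers, Set<String> authors, Set<String> identity) {
        boolean readProtected = false;
        for (String r : readers) {
            if (!r.isEmpty()) { readProtected = true; break; }
        }
        if (!readProtected) return true; // everyone with database access sees it
        // identity = user name + group memberships + roles
        Set<String> allowed = new HashSet<>(readers);
        allowed.addAll(authors);
        allowed.retainAll(identity);
        return !allowed.isEmpty();
    }

    public static void main(String[] args) {
        Set<String> nobody = new HashSet<>();
        Set<String> joshua = new HashSet<>(Arrays.asList("[Joshua]"));
        Set<String> joe = new HashSet<>(Arrays.asList("CN=Joe/O=Acme"));
        // No non-empty Readers value: no read protection, document is visible
        System.out.println(canRead(nobody, joshua, joe)); // true
        // Read protection active, Joe is neither reader nor author: hidden
        System.out.println(canRead(joshua, nobody, joe)); // false
    }
}
```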


Access Control in Domino - The Basics

QuickImage Category
One of the nice effects of XPages is the influx of "young blood" to Domino development. While they have no problems with "the other skills", all things Domino are new to them. Since the first homework I give them is to subscribe to and create an account on to chat with us, I'll post some more entry level articles to help them along. This post is one of them. It introduces Security and Access Control in Domino.
Access Control is strictly hierarchical in Domino. If any access attempt on a higher level fails it doesn't matter if you would have access on a lower level. The resource is locked away. It is like: "it doesn't help you that your car is unlocked if you can't get into the garage where it is parked". The access levels are: server, directory, database and document.
  1. Establish the user's identity (or the lack of it). This can be through the Notes PKI, username and password, an X.509 certificate or one of the supported SSO schemes (like LTPA). Authentication is triggered explicitly by a logon request (&Login) or implicitly when a resource is accessed that requires more access than the current level (that applies to anonymous and authenticated users in the same way)
  2. Once identity is established Domino checks if that user is allowed to access the directory and NSF
  3. The ACL is opened and checked to determine what the user can do. This can range from No Access to Manager
Domino Access Control Levels
The access to a server is governed by the server document and the Internet configuration documents for your Internet domains/subdomains. The default for all Notes clients is to require authentication. Notes client authentication happens via Notes' PKI infrastructure, so the server must trust your ID, OU or O. This happens implicitly when you belong to the same OU or O. Once you are authenticated the Domino server can check your authorization. It uses your full X.500 compliant name for that (like CN=Peter Pan/OU=Toons/O=Disney). Using a standard Internet protocol like http(s), pop3, smtp or imap, authentication uses username and password. For http(s) there is also an option to use X.509 certificates. Once you have authenticated you are "known" to the Domino server by your fully qualified X.500-style name. That name might look different if you store users not in the Domino directory (or an additional NSF based directory) but in LDAP. Stored in LDAP your name could follow LDAP syntax and look different (like cn=Peter Pan,ou=Toons,o=disney). Quite often your admin will allow anonymous access to your Domino server (for convenience in the intranet or for general viewing on the Internet). In this case you are known as Anonymous. This is also the case if you use Domino Designer to web preview a local NSF. Once Domino has established who you are (authentication) it checks what you can do (authorization). So you might successfully authenticate but the Domino server figures (based on your identity including group memberships) that you are not "on the guest list" and will deny you access.
Once you gain authorized access to the server Domino checks the next level of authorization. If you access an NSF inside the Domino data directory that would be the Access Control List (ACL) of the respective NSF. However when a database is stored in a directory that is linked using a Domino Directory Link, an additional authorization check is performed against the security list in the dir file (I haven't seen much use of that, but it is a valid option - especially if you want to hide the existence of an NSF from fellow server users).
The next level check is the ACL.


Speed boost for your Notes client on Linux

QuickImage Category
You need enough RAM. Follow these steps (adopted from here):
  1. Stop the Notes client
  2. Create a new directory: mkdir ~/notestemp
  3. Edit your notes.ini and add:
  4. Edit the mounting table: sudo gedit /etc/fstab
    Add this line (one line, tab separated values):
    tmpfs /home/[yourid]/notestemp tmpfs defaults,noatime,mode=1777 0 0
    and save the file
  5. Mount the new directory: sudo mount -a (a restart would do too)
  6. Start the Notes client
  7. Optional: Move more temp stuff to tmpfs
(Don't forget: [yourid] stands for your login name, don't take it literally). As usual: YMMV.


Accessing "Arbitrary Data" in Notes Documents (Sametime BuddyList) followup

QuickImage Category
Yesterday I stated "Neither the LotusScript nor Java API allows us to process this item type" about the Notes item type "Arbitrary Data". Today I stand corrected. It turns out that since R6.5 we have NotesDocument.GetItemValueCustomDataBytes and NotesItem.GetValueCustomDataBytes. Carl Tyler from Epilio (remember: Sametime without Epilio is like Sushi without Wasabi) filled in the missing blanks. The method requires a data type and Carl shared that the data type for the BuddyList is UbqOpaque. The second important piece of information: buddy lists are stored in an item named "8193". If a buddy list grows too big, additional items "8193.1", "8193.2" etc. are added.
So I wrote a little agent that now extracts the whole buddy lists into the C:\export\ directory. One interesting observation: all buddy lists (and I had some with double byte names) started with the bytes 110 7 0 0 before the <?xml... First I thought that to be a Unicode Byte Order Marker (BOM), but it seems they are not related. So when you want to process these files you might need to edit them first. Inside my Java class I take care of that. The updated code can be downloaded including the source code as before. When running the report I found that I had to open and save the exportbuddies.xml before the XSLT transformation would run properly.
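Stripping those leading bytes can be done by scanning forward to the XML declaration. This is a sketch of the idea, not the exact code from the download:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BuddyListClean {

    // The exported items start with a few non-XML bytes (observed: 110 7 0 0)
    // before the "<?xml" declaration. Drop everything up to that marker.
    static byte[] stripPreamble(byte[] raw) {
        byte[] marker = "<?xml".getBytes(StandardCharsets.US_ASCII);
        outer:
        for (int i = 0; i <= raw.length - marker.length; i++) {
            for (int j = 0; j < marker.length; j++) {
                if (raw[i + j] != marker[j]) continue outer;
            }
            return Arrays.copyOfRange(raw, i, raw.length);
        }
        return raw; // no declaration found: return unchanged
    }

    public static void main(String[] args) {
        byte[] xml = "<?xml version=\"1.0\"?><list/>".getBytes(StandardCharsets.US_ASCII);
        byte[] raw = new byte[xml.length + 4];
        raw[0] = 110; raw[1] = 7; // observed preamble; raw[2] and raw[3] stay 0
        System.arraycopy(xml, 0, raw, 4, xml.length);
        System.out.println(new String(stripPreamble(raw), StandardCharsets.US_ASCII));
    }
}
```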
As usual: YMMV


Learning Java

QuickImage Category
Java is the workhorse of corporate applications. No longer considered sexy, but battle hardened and mature. For a Notes developer there are a number of reasons why some Java knowledge is essential: LotusScript doesn't allow network access (and "cheating with COM objects" doesn't count), doesn't provide threads or, most importantly, access to a rich ecosystem of ready-baked libraries. If you want to create some fancy components for your Notes client or a sharable extension library for XPages, Java it is. But how to get started? There are a number of resources that give you easy access. Once you come to terms with the fact that Java is case sensitive, your first program shouldn't be far off. Still don't know how to start? Copy the lines below:
package myfirst;

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}


Virtues of Document Databases

QuickImage Category
Lotus Notes is a document oriented database: A document oriented database stores information as documents of related data. All of the data within a document is self contained, and does not rely on data in other documents within the database. This can be quite a shift if you’re used to working with a relational database, where data is broken up in to multiple rows existing in multiple tables, limiting (or eliminating) the duplication of data. Although radically different, the document oriented approach is a very good fit for many applications. For some applications, data integrity is not the primary concern. Such applications can work just fine without the restrictions provided by a relational database, which were designed to preserve data integrity. Instead, giving up these restrictions lets document oriented databases provide functionality that is difficult, if not impossible to provide with a relational database. For example, it is trivial to setup a cluster of document oriented databases, making it easier to deal with certain scalability and fault tolerance issues.
Lotus Notes is not the only one. There is the "son of Notes" - CouchDB (I'm looking only at the database part, not programmability etc.) and others. The description above is shamelessly lifted from a CouchDB article.


Caveat when upgrading to Notes 8.5.2 CD5

Little beta oddity: when installing Notes 8.5.2 CodeDrop 5 on a blank machine it works like a charm. Installing it over an existing Linux or Mac installation worked well too. On Windows using a full client (including Admin and Designer) I came across an odd behavior (it is beta after all). Notes would start into the "Loading please wait" splash screen and go into an endless loop. I tried a clean reinstall, only retaining desktop8.dsk, names.nsf and notes.ini (having moved other NSFs out of the way for the time being). The loop wouldn't go away. Turns out the culprit was something in the notes.ini. When stripping it down to a minimum (plus some convenience entries) Notes started really fast - faster than any previous 8.x version. I don't know if the problem also occurs with the standalone standard client. This is the notes.ini that I used:
KitType=<<your existing value>>
Directory=<<your existing value>>
InstallType=<<your existing value>>
NotesProgram=<<your existing value>>
Location=<<your existing value>>
KeyFileName=<<your existing value>>
KeyFileName_Owner=<<your existing value>>
MailServer=<<your existing value>>

As usual YMMV


Ease your Notes Client Rollout

Managing client software can be easy when planned and administrated properly. A good deal can be achieved using Domino policies, Smart Upgrade and some silent install options when rolling out. If you want to remodel your environment (e.g. to move the data directory into the user profiles and clean things up) you could consider the Panagenda Marvel client or the BCC Client Genie (both made in German speaking Europe).
But there are more options to consider. As you might know, Notes can run off a memory stick (Lotus Nomad) - technically more correct: off a removable medium. Nobody stops you from preparing an ISO image and mounting it on the machine you want to use it on. There are sufficient tools around to help you with the mounting process. Of course Nomad comes with a set of its own challenges. Following the idea: wouldn't it be great if there were a virtual file system that handled a virtual registry and reduced your whole install to a single copy of one executable file? Many years ago (in a different life) I used a tool that did exactly that. Its name was Thinstall. It was provided by a small innovative company, which was OK for my requirement then, but would have me worried for an enterprise deployment. The company is no more, since EMC/VMware realized their potential and bought them. The tool is now called VMware ThinApp and has matured from its already promising beginnings. A client licence goes for below $50 for the runtime, but you need the ThinApp Suite 4 to package your application. Just imagine: any app you deploy is an install-free single exe file. That would be just like one of the mobile app stores.
As usual: YMMV


IBM OneUI v2

In Domino 8.5.1 the IBM OneUI v2.01 files were sneaked in. In Domino 8.5.2 they will be official. (I've written about IBM OneUI before). So it is time to look at the official 2.0 documentation (or the 2.1 version) and make yourself familiar with the concepts. To use the OneUI you just manually type oneuiv2 as your theme and your application will pick it up. Make sure that you type it all lowercase, so it works on all servers. On my last count the styles contained 287 unique .lotusXXX class names and 102 .xspXXX class names in various combinations of HTML elements and nesting orders. So somebody spent a substantial amount of time to get all the various cases covered. I probably will blog about the structure a little more in the future. To make the use of these classes more convenient I updated my dummy stylesheet and provide two stylesheets, one for .lotusXXX and one for .xspXXX, to include in your projects. There are no CSS definitions inside, so you won't have unwanted side effects.


Enabling private contacts

QuickImage Category
On a Notes client your email is stored in your mailbox, your contacts in your address book, typically the names.nsf. Since that file is hardly ever replicated back to the server (unless you roam) a different place for contacts was needed, so you can use them in Lotus iNotes or Lotus Traveler. So contact sync was born. In its latest incarnation it is a task on your replicator page that makes sure the contacts in your local address book are kept in sync with your mail file and can be managed and groomed by your personal assistant (if you have one). All contacts are copied into the mail file with the field $PublicAccess set to "1". So anybody who can read your calendar entries (the calendar entries, not just your free time) will be able to see your contacts as well. Typically that would be a team member or a personal assistant. However you might want to keep the addresses in your address book a little bit more personal (the public ones are on Facebook anyway), only allowing them to be seen by users who can see your eMails too.
Calendar sync in the replicator
Currently you can use the [x] Mark private check box for a contact and it will be hidden from anybody but you, the LocalDomainServers group and anybody with the role [PersonModifier] (a role that doesn't exist in normal mail files).
Keep your contacts confidential
To keep it a little more open we need to customize our contact form a little (remember: you need to follow some guidelines when customizing IBM templates). When you inspect the Contacts form of your names.nsf (or your copy of pernames.ntf) you will find a checkbox "Confidential". It is the box that shows the [x] Mark private check box.
Checkbox to keep contacts confidential
Behind that check box you now add one field named $PublicAccess. Make it a check box too. Give it a value of Hide from calendar users|1. You might look for a clearer wording here.
Use of $PublicAccess
In the input translation formula you add a little @Formula magic to make it work with the Confidential field: @If(Confidential="1";"";@ThisValue). With the check box set (which won't stay set if the contact is confidential) only users who can read your eMails will see the addresses, but not those who can only see your calendar.
Hide From Calendar Users
As usual - YMMV.


Domino and Java Performance

QuickImage Category
Java was a sleeping beauty in Domino land. Looking at the version history:
  • Domino 6.x : JVM 1.3
  • Domino 7.x : JVM 1.4
  • Domino 8.0x : JVM 1.5 a.k.a JVM 5.0
  • Domino 8.5x : JVM 6.0
  • Domino [some-future-version] : JVM 7.0
If you do anything with Java on Domino check out the IBM performance data
Java Performance gains from JVM 5 to 6

Another reason to stay current with the Domino versions.


RPC Monitor - Watch your wire

The old IT joke goes: "If something is broken, it is always the cable". In our networks the closest you get to a cable is the low level protocol a client (rich client or browser) uses to talk to the server. A protocol analyzer should be in the bag of tricks of any admin and developer. For low level work there are tools like Wireshark, but they may be a little too deep for our needs. Looking for something simpler you will find Apache TCPMon for monitoring your HTTP traffic (cross-platform, I don't care for single platform tools), but you will be hard pressed to find something to watch your Notes client communication, short of using the notes.ini settings CLIENT_CLOCK=1 and DEBUG_CONSOLE=1 and other settings. Enter the power of the R8.x sidebar and the creativity of Keith Smillie. He created a nice sidebar plug-in called RPC Monitor
RCPMonitor by Domiclipse.png

Once installed it allows you to see exactly what is going on during communication between the Notes client and the Domino server(s). Very educational to watch replication, online access, response times and data volume with and without compression. While you give Domiclipse a visit, check out DocViewer, Repton, DXLExplorer, DXLExporter, DXLImporter and the famous LOLcode editor. Keith includes installation instructions and access to the source code. I haven't tested it on Mac, but it runs well on Linux. Well done!


Communicating with IBMers and Lotus professionals using Sametime

QuickImage Category
You can communicate with IBMers and Lotus professionals using Lotus Sametime. Chris Pepin published instructions on how to do that long ago. However there is more than this one option.
Update: Thanks for all the comments. I've reformatted and updated the blog post accordingly.
  • IBM External Sametime Server: You need to have an IBM id; to get one, register online. Once you have it, create a (new) community in your Sametime client (see below). Thereafter look up your IBMer to add him/her to your buddy list.
    • Server/Port: / 80
    • Advantage: You can reach any IBMer using Sametime, surprise them.
    • Disadvantage: Availability is not production level
  • BleedYellow: It won't connect you to all IBMers, but to the Lotus community at large. You can register at BleedYellow, a Lotus Connections site courtesy of Group Business Software (part of them formerly known as Lotus 911). Once you have your id you not only have Sametime connectivity but also access to a full Lotus Connections experience. After authenticating you want to add two public groups to your client: "IBMers" and "YellowBleeders". The first gives you access to all IBMers who have registered on BleedYellow, the latter to the whole community.
    • Server/Port: / 1533 or 80
    • Mobile/Web clients: / 80
    • Advantage: More than Sametime, choice of ports, larger Lotus community
    • Disadvantage: Not all IBMers are there
  • Lotus Greenhouse: the test drive site for Lotus products. It allows you to test drive a lot of Lotus' latest software including iNotes, Traveler and Portal. You need to register.
    • Server/Port: / 1533
    • Advantage: As Mark said: lots of IBMers there. One registration to try Lotus stuff
    • Disadvantage: Not a production level environment
  • Lotus Live: IBM's cloud offering for collaboration. Lotus Live features eMail, Activities, Files, Contacts, Meetings and IM. You need to be invited, have a trial account or be a subscriber (yes I know: the "buy now" button is missing).
    • Server/Port: / 1533
    • Advantage: Production level environment, great for external communication (I love activities and files)
    • Disadvantage: Ultimately you need a paid account. Not too many IBMers there yet.


Attention Central - A concept for user driven notifications

QuickImage Category
Probably the most popular category of applications built with Lotus Notes and Domino are workflow applications. Those applications need to notify the respective users about changes in status and the need for user action. In other words: they are attention seekers. Early or simple workflow applications simply used code like @MailSend( DocApprover ; DocRequester ; CentralArchive ; "Request for approval: "+subject ; "Please approve leave as specified in the request from "+docRequester ; Description ; [IncludeDocLink]). eMail wasn't such an annoyance and the world was good. Later more sophisticated approaches using NotesDocument.send were added, but basically the principle stayed the same.
With the rise of alternative notification mechanisms more options were added. Today we find notifications using RSS/ATOM, eMail, SMS, text-to-voice calls, automatic todos, tweets or instant messaging (I haven't seen a workflow application writing on your Facebook wall so far). All these attention grabbers have in common that they are part of the application and what I would call "publisher driven". I would like to make the case for a change and suggest an intermediary that takes in all the notifications of all the enterprise applications and lets users decide how and when (and when not) to be notified. Since notifications are designed to attract your attention I call this intermediary "Attention Central". You could see Attention Central as a precursor or filter for the "river of news" envisioned by IBM's project Vulcan. The interesting difference: it is possible with the Notes and Domino infrastructure you have today.
What properties would such an application have? Here you go:
  • Universal Interfaces: Applications can deposit notifications using REST, eMail, SOAP, Sametime or MQ. Libraries for LotusScript, JavaScript, Java and other languages make depositing notifications simple (actually with a REST API any active language can do that)
  • Universal delivery: Notifications can be delivered to users via eMail, Sametime (both pull and push), SMS, as RSS/Atom feed, using MQ or being polled through a web service
  • Application Profiles: Applications that are allowed to use the service have a profile where the default mechanism for notifications is defined. The profile can specify what channels users can choose from.
  • Prioritization: It can be specified in what sequence notifications are delivered. E.g. "use Sametime if user is online, otherwise use email" or "Use Sametime, wait until user is online" or "Use eMail unless OOO is active, then send one summary when back"
  • Conditions: Use RSS for normal priority, use Sametime for "urgent"
  • User profiles: Not every user would bother, so the user profiles are optional. In a user profile the defaults for the given user can be specified and how they are mapped to application defaults, e.g. "If the app default is eMail, send me one summary per day, but notify per Sametime if urgent"
  • Delegation: Users can specify on an application-by-application basis, or as a default, whether notifications should be delegated. A delegation can be temporary (from/to), conditional (if OOO is active or this web service returns true) or permanent (my PA handles all leave requests for my team)
  • Transparent: Regardless of notification channel a summary of notifications would be available via ATOM/RSS and on the main site
  • Transient: Notifications that have served their purpose would be removed from plain view (that might need some adjustments in the participating applications)
  • Integrated: The UI (profiles and notifications) is available as a component for composite applications or iWidget integration. Workflow applications can make the notification settings part of their own UI without actually storing them anywhere other than in Attention Central
  • Distinct: Applications that call the API can specify what quality the notification has: progress report, completion, action request etc. This information can be used to define notification channels
Once you get the general idea, the list of ideas will grow. It will definitely allow you to manage the stream of attention and action demands more effectively. To be clear: the application would handle notifications only. It wouldn't grant access or perform workflow steps or anything else, since this would require changes too deep in your existing workflow architectures. Also, the hallmark of flexible application architectures is: one module, one task. Of course you might end up with your deputy getting notifications of events where the originating system doesn't allow access. But that is already the case today if you just give your deputy access to your inbox, so Attention Central doesn't degrade from what you have now.
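A minimal sketch of how such a prioritization rule could look. The function name and the user properties (online, outOfOffice) are invented for illustration; they are not part of any existing API:

```javascript
// Hypothetical sketch: pick a delivery channel for one notification.
// Rule: urgent + online -> Sametime; out of office -> one summary on
// return; otherwise Sametime when online, eMail when not.
function pickChannel(user, priority) {
  if (priority === "urgent" && user.online) {
    return "sametime";          // instant attention for urgent items
  }
  if (user.outOfOffice) {
    return "summary-on-return"; // queue for a single digest when back
  }
  return user.online ? "sametime" : "email";
}
```

User profiles would then simply be a second lookup layered on top of this default rule.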


Domino URL Cheat Sheet

QuickImage Category
While answering a question on StackOverflow I had to look up Domino's URL syntax. Here are the essential links:


Upgrading ODS43 to ODS51

QuickImage Category
The most talked about space saving feature in Domino 8.5 obviously is DAOS. The lesser talked about space saver is ODS51. ODS51 features (not necessarily introduced only then): design compression, document compression, LZ1 attachment compression. I put it to the test with my very own mailbox (happily running R8.5.x and ODS51):
ODS51 mailbox with 5261 documents at 197MB
The same database after creating a copy in ODS43 (R6/R7) format. Nota bene: it doesn't contain any view indexes yet!
ODS43 mailbox with 5261 documents at 357MB
If you do the math: 357 -> 197 => 45% reduction in size. I don't store attachments in my mail file. Files that don't go directly to Quickr, LotusLive or Lotus Connections are taken care of by IBM's My Attachments tool, which effectively moves them to their own database.


Comparing Microsoft and IBM deployment diagrams - Part 1

QuickImage Category
Depending on whom you listen to, TCO is the lowest for [insert-your-product-of-passion-here]. Robert Sutton (a must read, yes - every book) suggests basing management decisions on solid evidence. So what is part of TCO? Hardware and software prices, real estate in the server room, electricity and cooling, labour for the admin for regular operations, patches and upgrades, as well as opportunity cost for downtime. I'd like to shed some light on the hardware side. In a loose series I will compare the hardware deployment for various IBM and Microsoft scenarios. For starters I use a comparison of plain Domino with plain Exchange. In a later post I will add mobility, instant messaging, collaboration and whatever comes to my mind. My first scenario is a single location with 12000 heavy mail users who demand high availability. Since I'm not a Microsoft expert I need to poke into publicly available resources for the deployment diagram, and I guess I'll stand corrected in one or the other case. The first thing I had to learn about Exchange is that an Exchange server can have different server roles and that it is considered best practice to split these in large installations. HP's sizing tool for Exchange 2007 was used for the Exchange estimates.


Frozen Notes Applications

QuickImage Category
While discussing AutoUser, the top tool for running functional tests against your Notes client applications, with Lucius (Smart Toucan's CEO), we compared notes (pun intended) about the state of Notes applications. We identified a phenomenon which Lucius labeled "frozen Notes applications" (Accenture seems to still have quite a lot of them).
A frozen Notes application isn't one that is crashing or misbehaving. It is one, mostly at the core, supporting a critical business process with interface tentacles in all directions. Since it has grown over the years there are quite some layers and imprints of developer generations in it. It typically does what it should do well, but there is great reluctance to touch it. Its interface shows signs of age and entropy. Neither adding nor altering features, nor moving it to a new version, is considered a risk worth taking by the IT people (I have actually seen R4.6 servers still in operation for exactly that reason). Updating such an application is what Wikipedia calls Brownfield development (a term borrowed from civil engineering).
What to do in such situations? The temptation is great to just rewrite the whole thing on a platform that is more in vogue. Joel Spolsky has warned against such attempts, and the reports coming in about rewrites (regardless of platform - just ask about VB to VB.NET rewrites) are not very encouraging. A step by step modernisation looks more tedious in theory but proves way more robust in practice.
So what is the right course of action? Document your apps (watch this space for an upcoming feature about that) and put them into a test harness. Way back there used to be LoadRunner (not sure if it is still around) supporting Notes clients. Now there is AutoUser, which is capable of testing applications against a multitude of clients. Refactor your applications when adding new features. Lucius is musing about making AutoUser more widely available. I'm looking forward to that.


Understanding the Notes directory structure

QuickImage Category
In the beginning there was C:\Notes and C:\Notes\Data and a notes.ini somewhere in the path pointing to the data directory. And life was good. Over time things became more complex and powerful; the notes.ini moved to the data directory (fun when upgrading) to accommodate multi-user installs. Notes 8 arrived and Notes turned into an Eclipse RCP application uniting the classic C core with a flexible UI based on IBM's Expeditor Framework. Now there is a lot more to Notes (my program directory totals 23768 entries using 958 MB - that's the client including Symphony and Sametime on Ubuntu 9.10). While you still can wipe the program directory and reinstall everything, you want to preserve all the data that is relevant. A recent Domino wiki entry explains the meaning and purpose of the structure of your data directory in detail. Makes an interesting read.


Address Dialog on steroids

Over the long life of Lotus Notes the address dialog has been a trusted, never changing companion:
The Notes address dialog
Over the years it got some facelifts, like the ability to drag and drop, the ability to sort by Notes hierarchy, language or corporate hierarchy (anybody seen that?), or to look at details in yet another window. It works OK as long as the number of names is small and diverse. However, once you enter cultures that are notoriously short of last names and sport large companies, it gets tedious to pick the right Mr. Wang or Mr. Lee. In short: the address dialog is overdue for an overhaul. Since simplicity isn't simple, careful considerations are needed: how much more complex can the dialog get and stay reasonable, and how simple does it need to be? The requirements I came up with can be quite conflicting.
  • I want to be able to filter the search scope based on a series of criteria:
    • by organizational properties (like department name or org type, e.g. "sales")
    • by geography
    • by job role
    • by tags
    • by favorites
    • by source: internal/external, groups, individuals, from Facebook
    • by communication history: frequency or date
    • by search in communication
    • by search in profiles
  • access to list by first name, last name or nick name
  • suggestion of addresses from social analytics
  • display of additional information (address card + tags + list memberships) to positively identify the recipient
  • Indication if encryption is available or not
  • Indication if alternate access (e.g. shared communities) is available
I'm sure there are more criteria available. Packing all of them into one dialog might be a challenge. I played around and created a mockup.
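To make the filtering idea concrete, here is a sketch of how stacked criteria could be applied to a contact list. The function and the contact fields (org, country, tags) are my own invention for illustration, not an existing Notes API:

```javascript
// Hypothetical sketch: keep only contacts matching every criterion.
// Scalar fields must match exactly; array fields (e.g. tags) must
// contain the wanted value.
function filterContacts(contacts, criteria) {
  return contacts.filter(function (c) {
    return Object.keys(criteria).every(function (key) {
      var want = criteria[key];
      var have = c[key];
      return Array.isArray(have) ? have.indexOf(want) >= 0 : have === want;
    });
  });
}
```

Each filter in the dialog would simply add one more key to the criteria object, narrowing the candidate list step by step.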


JavaScript Closures

XPages allow you to use JavaScript closures in both client and server side code. There are plenty of explanations available, which seem to make them rather more mysterious. After looking at a lot of them I conclude: "JavaScript closures are like riding a bicycle: nearly impossible to explain, but obvious and natural once you get it". So here is my try, from a LotusScript perspective.
The first piece of the puzzle is the fact that there is no difference in JavaScript between a function and a class. To get an instance of a class/function you simply call the function with new:
var myInstance = new MyFunction();
The second piece is the fact that calling a function is the equivalent of calling the default method of a class instance. Default methods are absent from Java, dotNet and LotusScript, but were available in VB (typically .text). So myInstance() actually is myInstance.defaultMethod(). The third piece is to understand functions that return a function/object as a sort of factory pattern with an abbreviated syntax. In LotusScript you would write something like:
Option Public
Option Declare

%REM
	Class BaloonFactory
	Description: Creates all sorts of baloons
%END REM
Public Class BaloonFactory
	Private currentGas As String
	Private currentColor As String

	Public Sub New
		me.Color = "Red"
		me.Gas = "Helium"
	End Sub

	Public Property Set Color As String
		me.currentColor = Color
	End Property

	Public Property Set Gas As String
		If Gas <> "Helium" And Gas <> "Hydrogen" Then
			me.currentGas = "Helium" 'Let us play safe
		Else
			me.currentGas = Gas
		End If
	End Property

	Public Function makeBaloon As Baloon
		Set makeBaloon = New Baloon(me.currentColor, me.currentGas)
	End Function
End Class

%REM
	Class Baloon
	Description: We like them in a lot of colors
%END REM
Public Class Baloon
	Private currentGas As String
	Private currentColor As String

	Public Sub New(Color As String, Gas As String)
		me.currentColor = Color
		me.currentGas = Gas
	End Sub

	Public Sub POP
		MsgBox "You popped a " + me.currentColor + " baloon filled with " + me.currentGas
	End Sub
End Class

%REM
	Sub Demo
	Description: Exercises the factory
%END REM
Public Sub Demo
	Dim factory As New BaloonFactory
	Dim b As Baloon
	Dim r As Baloon
	Dim g As Baloon
	Set r = factory.makeBaloon()
	factory.Color = "Blue"
	Set b = factory.makeBaloon()
	factory.Color = "Green"
	Set g = factory.makeBaloon()
	Call r.POP()
	Call b.POP()
	Call g.POP()
End Sub
This LotusScript was converted to HTML using the ls2html routine,
provided by Julian Robichaux at
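Translated to JavaScript, the closure does the factory's job without any class declaration: the private state lives on in the scope captured by the returned functions. This is my own sketch (the identifiers mirror the LotusScript example, they are not part of any Notes API):

```javascript
// The balloon factory as a closure: currentColor and currentGas are
// private state, alive for as long as the returned object is.
function makeBaloonFactory() {
  var currentColor = "Red";   // captured by every inner function
  var currentGas = "Helium";
  return {
    setColor: function (color) { currentColor = color; },
    setGas: function (gas) {
      // play it safe, as in the LotusScript Property Set
      currentGas = (gas === "Hydrogen") ? "Hydrogen" : "Helium";
    },
    makeBaloon: function () {
      var color = currentColor, gas = currentGas; // snapshot per balloon
      return function () { // each balloon is itself a closure
        return "You popped a " + color + " baloon filled with " + gas;
      };
    }
  };
}
```

Note how each balloon keeps the color and gas it was created with, even after the factory's settings change - that snapshot behaviour is the closure at work.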


DAOS and Transaction Logs

QuickImage Category
When you want to use DAOS in Domino 8.5.x you need to activate transaction logging. DAOS allows you to store attachments on cheaper disks and eliminates duplicates (not sure if you want DAOS? Use the estimator, it will tell you). BUT DAOS requires transaction logging. Transaction logging, to be performant, requires its own disk. The best practices guide states it very clearly: "It is absolutely essential to place the transaction log directory on a separate physical device devoted solely to transaction logging. Part of the performance improvement you will gain from transaction logging depends on having a separate disk, which allows fast writes to the transaction log files. Putting the transaction log on the same drive as other files — the operating system, the Domino executables, the Domino data, the OS pagefile, etc. — can result in a significant performance degradation even in comparison with an R4 server, since the server must read from and write to these files as well.".

So repeat after me: "I shall use a dedicated drive for my Domino transaction log. I won't use a partition, a directory or a SAN destination. I shall use something insanely fast for this drive."

No ifs, buts or eventualities. So the DAOS savings require some disk investment. While you are at it: make sure you use ODS51 and compression for design and data - and review the design for high-performance Domino servers (RAID 10 works best). Others have recommendations too.


Myth Buster: NSF doesn't scale

QuickImage Category
In a lot of customer discussions I hear: "Oh we need to go RDBMS since Notes' NSF doesn't scale". Which is an interesting statement. When digging deeper into that statement, I get a number of reasons why they believe that:
  • Somebody told them
  • They had a problem in R4
  • The application got slower and slower over the years (and yes, it's that same old server)
  • The workflow application using reader fields is so slow
  • They actually don't know
Then I show them this:
More than 1.6 million documents in an NSF
(actually not the graphic, but the live property box of my Notes client). The question quickly arises how the acceptable performance of such a database can be achieved. There are a few pointers to observe:
  • Watch your disk fragmentation (troubleshooting tip on the Notes and Domino wiki)
  • Be clear about your reader and author fields usage. In case the RDBMS fans insist on their solution, ask them to build the reader field equivalent in RDBMS and measure performance then.
  • Watch your view selection formulas carefully. (You don't use @Now, @Today, @Yesterday or @Tomorrow do you?)
  • You want to use DAOS to keep attachments out of the NSF (helps with fragmentation) -- don't forget to buy a disk for your transaction log.
As usual YMMV.


Domino and Disk Fragmentation

QuickImage Category
Domino is by nature an I/O intensive application. While R8 drastically improved I/O, there is still a lot of writing going on. A lot of Domino servers run on Windows and use NTFS as their file system. NTFS isn't exactly designed to run large files that constantly change in size (this is why an RDBMS usually pre-allocates the table space to be used - and does so on new machines). Now just imagine you would need to ask your admin to pre-allocate the 2GB mail file quota for every user regardless of current need (your disk sales guy would send you to heaven for that). So an NSF reflects its actual size. It's kind of being between a rock and a hard place. Enter performance improvement strategies. An old myth is that compact -C will improve performance. While it makes the NSF neat and tidy, it splatters the file segments all over the place. Adam Osborne explains it nicely in a blog entry. There is a video about it. So what should you do:
  1. Learn! Know the Domino Server Performance Troubleshooting Cookbook inside out. Watch Andrew's presentation on Domino performance. Ask Auntie Google
  2. Move your temporary files onto another disk (I would love to hear how that works if you use a RAM disk or a solid state disk). It has two effects: I/O is better distributed and the temporary files don't contribute to the data disk's fragmentation. You need to set two notes.ini parameters: NOTES_TEMPDIR and View_Rebuild_Dir
  3. Build a high performance Domino Server (remember RAID10 looks good) and tune it well
  4. If Windows is your server operating system: defrag, defrag, defrag! There is an excellent presentation (don't mind the fonts) by Albert Buendia of the Spanish Lotus User Group SLUG about it. Realistically you have two choices: the free (and "find-your-own-support") DominoDefrag on OpenNTF (managed by Andrew Luder, Australia) and the commercial DefragNSF from the Australian IBM Business Partner Preemptive Consulting (run by Adam Osborne, Australia). Defrag.exe on Windows isn't an option for executing (only for analysing) on your 24x7 box.
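For step 2, the two notes.ini parameters could look like this; the drive letters and paths below are placeholders, point them at a separate fast disk of your own:

```ini
; notes.ini - move temporary and view-rebuild files off the data disk.
; T:\ stands for a dedicated fast disk; adjust the paths to your setup.
NOTES_TEMPDIR=T:\NotesTemp
View_Rebuild_Dir=T:\ViewRebuild
```

Both directories must exist and be writable by the Domino server before you restart it.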
It would be interesting to see a fragmentation shootout between NTFS, JFS and EXT4 (anybody listening @ developerworks?)


Notes forms and Notes documents

QuickImage Category
Sometimes it is good to go back to some basics to explain what Notes and Domino are about. A Notes database is a schema-free document store with built-in message routing and a document level access control mechanism. I'd like to shed some light on the nature of the schema-free functionality.

A Notes document (technically a note) contains any number of items. There is no need to define possible items upfront. You put an item into a document and it is there. Items have a name, a data type, a sequence number (used for replication) and an array of values. That is one of the really strong points in Domino (shared only by Adabas and FileMaker): you don't need to create a master-child table combination, you just store multiple values directly in the document. So a note is conceptually very close to a schema-free XML document.

The usual way to add an item to a note is to use a form. In a form the developer defines fields (which have a data type, name and rendering properties as well as processing rules: default values, transformation and validation conditions) that get mapped by name onto items. Note the difference: in a form we have fields, in the document (note) we have items. A Notes form is basically a piece of RichText with fields thrown in. You can create a working form without the need for any buttons and actions: just throw in some fields and text around them and you are done. The Notes client's default functions allow you to open, save and close a form. When you open a form and save it, a Notes document is created with one item for every field in the form. Fields without content result in items with empty values.
Saving a new document
In addition some system fields (e.g. $UpdatedBy for the user saving the form) and the field Form, containing the name of the form used to create this document, are saved. The Form field is the only link between the form and the document (if the form has an alias, the alias name is used - handy since different forms can share the same alias). That link can be altered.
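The item model described above can be sketched in a few lines of JavaScript. This is a conceptual toy only - it mimics the shape of the real NotesDocument API (replaceItemValue, getItemValue) but is not that API:

```javascript
// Conceptual sketch: a note as a schema-free bag of named items,
// where every item holds an array of values (multi-value for free).
function makeNote(formName) {
  var items = {};
  return {
    replaceItemValue: function (name, value) {
      // no schema to consult - assigning an item simply creates it
      items[name] = Array.isArray(value) ? value : [value];
    },
    getItemValue: function (name) {
      return items[name] || [];
    },
    save: function () {
      // the Form item is the only link back to the design element
      items["Form"] = [formName];
      return items;
    }
  };
}
```

Notice that nothing prevents two notes in the same "database" from carrying completely different item sets - which is exactly what schema-free means.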


Preparing for an IBM compliance audit of your Notes licences

IBM is a business enterprise and sells software licences for a living (for charity you need to ask a certain gentleman from Redmond). Sometimes there are disputes between IBM's customers and IBM about how many licences are in use in a certain company. In the spirit of IBM's 3rd core value - trust and personal responsibility in all relationships - IBM typically tries to sort this out in collaboration with the customer, but from time to time that escalates into a formal compliance check. It has been reported not to be a necessarily pleasant experience, since for legal reasons outside audit organisations are used who might not have been tuned into IBM's approach to relationships.
In Lotus land a potential sticky point is the question how many Notes Messaging CALs and how many Notes Enterprise CALs (formerly known as Notes Messaging Licence and Notes Collaboration Licence) are in use. The difference between the two licences in a nutshell: Messaging allows eMail, Discussion, Blog, Reference and Journal; Enterprise allows any database and the use of Domino Designer (Read the full details).
An audit will not accept statements like "yeah, the licences are stated as Enterprise for all, but only X percent actually use databases"
What you can/should do in preparation: add all users you think are messaging-only users to a group (or a set of nested groups) and add this group, with -No Access- level, to the ACLs of all databases not based on the IBM templates above. This way you can rapidly figure out if any user requires access to those applications (usually accompanied by a scream about IT messing things up).

Also be clear: IBM doesn't require an online activation, doesn't disable software when you alter the hardware or a patch fails, and doesn't sneak in controlware to check on your licence status.

And of course the way to smart collaboration is not to downsize your licenses, but to upsize your collaboration using some of the excellent free and commercial applications for Domino (Bonus track for customers in Thailand and Hong Kong: Check out Comware).

Disclaimer: This is not legal advice or something you can cite in an argument; these are just my thoughts.


Has replication as we know it reached the end of its usefulness?

QuickImage Category   
In a fully connected, always on, any device world there doesn't seem to be space for replication (or synchronization, as it is called by other vendors). So why not scrap it and move on? I'm sure no one will miss the little diamond indicators denoting a replication conflict.


Supercharge your Notes Archive in 7 easy steps

QuickImage Category
Archiving in Lotus Notes has come a long way. It is easier than ever once you get it set up right (if you were nice to your Notes administrator, (s)he will have set that up for you using a policy). So here is the trick. Create one folder named Archive. It is case sensitive, so archive or ARCHIVE won't do.
Step 1: Have a folder named Archive
You now can drag & drop documents into that folder. The standard Notes mail template contains code that recognizes when you drop into that specially named folder.
Step 2: Drag documents into the Archive folder
Tip: when dragging from folders other than the inbox use Ctrl+Drag, so Notes will retain the folder information (and yes - I'm getting myself a Huawei E5 router). You will be prompted to archive the documents:
Step 3: Say yes or no to archiving your documents
While this is sweet and easy, you would now have to wait until archival is complete. Waiting for a computer to finish its work is the last thing I want to do, so I say NO. With a few extra steps the documents still get archived away. Let me show you.


Sending Notes email as MIME messages

QuickImage Category
In the beginning there was Notes. And Notes was used inside the corporation only. And things were good. Then came the time to link Notes to other eMail systems. These other systems only supported text. And things were good. Then MIME arrived. Notes added MIME conversion and the trouble started <vbg>. A lot of admins didn't like MIME since it required additional bandwidth and opened up a can of potential worms. So they switched off the incoming or outgoing MIME conversion. And forgot about the settings. So today many users grieve because their nice messages are dumbed down to plain text. Fixing that is as easy as 1-2-3 once you get your server configuration document into edit mode:
Configuring Notes to send out MIME email

If most of your communication partners use Notes on the other end of the pipes, you might consider encapsulating the original message by default (your bandwidth watchdog will hate you for that):
Encapsulate Notes messages via MIME multipart

For the highest fidelity in RTF to MIME conversion you want to have a close look at iFidelity.


How does your (public) application look in [insert-browser-and-version] on [insert-os-and-version]?

QuickImage Category
Testing web applications is a tedious job. Gazillions of combinations of browsers and operating systems can turn it into a nightmare (suddenly a rich client doesn't look that bad anymore). Scott Hanselman has a very good blog post on testing considerations. One of the services he introduces in the post is Browsershots, which allows you to take screenshots of your site using many combinations of browsers and operating systems. The only catch: the site must be publicly accessible and the free service will publish your images. There is a private service available for a fee. Neat tooling for visuals. A similar offering is available from MultiBrowserViewer, which requires a client (Win only) installation. Still you need to test functionality and performance. Here you might look at JMeter for performance or Selenium for functional tests. Not to forget HTTPUnit, XPages unit tests and of course IBM's commercial tools like Rational Functional Tester and Rational Performance Tester (for testing Notes client applications you would use Smart Toucan's AutoUser). Of course performance testing comes with its own set of caveats (don't bother to test what you can't fix).


Domino and RDBMS

QuickImage Category
The question "What is the best way to integrate Domino with an RDBMS?" surfaces quite regularly. With the impending demise of NSFDB2 these are your options:
  1. Domino Enterprise Connectivity Service (DECS - part of Domino)
  2. Lotus Enterprise Integrator (LEI - separate product)
  3. Custom code using LCLSX
  4. Custom code using Java in XPages

So the interesting question is: when to use what?

To get the best performance you need to revisit the architecture rather than find solutions for a specific coding problem. When it comes to RDBMS code we find that a lot of developers like to write their own code, not taking advantage of the optimised code (caching, pooling etc.) provided by the platform. You need to be clearly aware why your code *MUST* talk to an RDBMS. These are the guidelines:
  1. Don't use an RDBMS (1)
    Domino's multi-value field capability can model master-detail records without the need for a relation. The NotesSQL driver makes Notes databases available to report generators that require tables as input. Web service and XML capabilities can cover many of the alternate use cases. I can have a look at specs to make suggestions how to implement them in standard Notes. We have databases in production with more than a million records and many GB in size. Some performance tuning works wonders:
    • Does my application really need an RDBMS in the background? If you are looking for performance that is probably a no; if you need to integrate into existing systems that is probably a yes. Also, if you look at transactional applications, that would be a yes
    • Does my application need real-time lookups into an RDBMS? In a lot of cases that is a NO. You might need to look up person information in the RDBMS. In this case you are better off using TDI (provided with R8) to synchronize the RDBMS with the Domino Directory (there are enough fields for most information *and* there is a customization API built into the directory) and then using @NameLookup, which beats any RDBMS connection (for the caching have a look here: -- and that was just R6). For other databases DECS/LEI is a good option. So instead of RDBMS code you use normal lookups in Notes. A neat trick here: store the UNID of the new document in the person's record, and instead of a @DBLookup or getDocumentByKey you can use @NameLookup/@GetDocField or getDocumentByUNID for high performance
    • Consider well: would an "on document change" agent be sufficient? Is the result of the transaction needed by the user directly? If not: use a Java thread for the connectivity so the transaction can continue without the user waiting
    Don't use an RDBMS (2)
    With XPages (introduced in Domino 8.5) an application now can use data from more than one document or database in a single form easily.
  2. Use DECS
    Does my application CODE need real-time access to an RDBMS? Domino provides DECS out of the box and LEI for a fee. DECS/LEI provide a robust, fast way to access RDBMS data without a developer needing to write (and maintain) ODBC/JDBC/LCLSX code. It also takes advantage of connection pooling. In DECS, Domino forms are configured to map to relational tables, including mapping multi-value fields to master-detail tables. Entries exist in Domino and the RDBMS. Any RDBMS would work: DB2, MySQL, Oracle and other vendors. DECS is a sufficient solution if creation/deletion of records happens through the Domino front-end; data changes (short of the primary key) can happen anywhere. There are thoughts about advanced uses of DECS.
  3. Use LEI
    Similar to DECS but with more options: records can be created/deleted from Domino and/or the backend. Data can be stored in the RDBMS only (Virtual views/documents). Data operations can be scheduled (great for reporting). I like the virtual views and probably will write about them soon(er or later).
    Domino doesn't do transaction handling in an RDBMS. If you need that functionality you need to ensure it on the RDBMS side. A best practice for write access: write documents back into a "command database" that is linked to the RDBMS using DECS/LEI. The table it is linked to would be an auxiliary table with an "on create" trigger. That trigger does on the RDBMS side what it needs to do (and the developer doesn't need to understand anything about Domino; they only need to understand the RDBMS). The trigger would update the "command record", so on the Domino side you can process the result
  4. Custom code using LCLSX
    Developers love this, but I haven't seen a good use case for it. Typically this is used for small transactions where the configurability of DECS/LEI is not appreciated. One case: an existing RDBMS with a lot of triggers/stored procedures can't be amended to incorporate a neat connection to Domino (the best solution: let DECS write into a temp table and have the onInsert trigger in the RDBMS pick up the values and call the stored procedure. It separates your Lotus(Script) code from SQL 100%. This way you don't need to mix code and SQL, which makes maintenance muuuch easier, safer and cheaper). If you have to code: use the LCLSX. Do not use ODBC directly (SQLServer especially sucks on ODBC; it is built for OLEDB, which the LCLSX uses). Encapsulate into classes.
  5. Custom code using XPages/Java
    You can use any Java class in XPages to connect wherever you want to connect. A suitable approach when you have *huge* RDBMS databases and need to display/render small portions at a time (e.g. a P/O database with a Domino based approval system). Maintain the connection in one of the contexts (application / session) and use a pool manager (but double-check: is DECS/LEI/TDI a better option?)
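To illustrate the pooling idea from option 5, here is a deliberately tiny sketch of a pool you might keep in the application scope. createConnection stands in for whatever connection factory you actually use; none of these names come from a real library:

```javascript
// Hypothetical sketch of a minimal connection pool: reuse idle
// connections instead of creating a new one per request.
function makePool(createConnection, maxIdle) {
  var idle = [];
  return {
    acquire: function () {
      // hand out a warm connection if we have one, else create
      return idle.length > 0 ? idle.pop() : createConnection();
    },
    release: function (conn) {
      if (idle.length < maxIdle) {
        idle.push(conn);        // keep it warm for the next request
      } else if (conn.close) {
        conn.close();           // pool is full: really close it
      }
    }
  };
}
```

A real pool would add validation, timeouts and thread safety - which is exactly why the platform-provided pooling (DECS/LEI, JDBC data sources) is usually the better bet.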
As usual YMMV.


Reader fields for large user populations (Extreme Edition 2.0)

QuickImage Category
The last attempt to work with large numbers of entries in reader fields didn't work that well, so I'm trying again. Andre suggested limiting the number of entries for performance (and after all, if there is any breathing human inside the little yellow bubble who understands Domino performance, it is him): "The other thing to think about is performance. When a user opens a view, the server has to decide which documents they have access to. So it takes a list of their groups and roles, and compares each of them to the Readers list of each document. If the user is in 30 groups and there are 500 values in the Readers field, that's 6000 string comparisons to determine that the user doesn't have access to one document. Multiply that by the number of documents in the view." So here is the plan:
  • The class signature stays the same as in the previous version. Signature means: the properties and methods and their parameters
  • Every application gets a unique identifier, so distinct group names can be built
  • Once the number of entries reaches a threshold, groups are automatically created in the NAB (that group creation for sure could be subject to great debate). Here are the options I was thinking about:
    • Allow direct access to the NAB to create/update groups. This option is feasible when the class is used for a web query save agent and the agent signer has sufficient rights
    • Store the names in Names fields and have a separate "When documents have been created or changed" agent process the changes
    • Use a "Run-on-server" agent and provide the document via the agent context (this also requires "buffering" the names in Names fields)
    • Use a web service to update the NAB
    • Use a mail-in database to handle requests
  • The DocumentUniqueIDs of all documents in use are recorded, so access becomes much faster (getDocumentByUNID vs. getFirstDocumentByKey)
  • Not considered: checking for duplicate groups - as in: check if a group with the same members already exists. That would open a Pandora's box of checks: what to do if one document alters the group?
I updated the class and implemented "direct NAB update" and "mail-in a change request". The class picks the mechanism based on its access to the NAB. Currently it works for server databases only (not local replicas; some more logic is needed there) and it doesn't capture the UNIDs of the group documents. Furthermore it is lacking better error handling and documentation. Have a look at the modified class.
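The threshold logic at the heart of the plan can be sketched in a few lines. This is an illustrative JavaScript sketch only (the real class is LotusScript; all names here, including the group naming scheme, are my own invention):

```javascript
// Sketch: below the threshold, put names straight into the reader
// field; above it, split them into uniquely named groups derived
// from the per-application identifier, so the Readers field stays short.
function toReaderEntries(appPrefix, names, threshold) {
  if (names.length <= threshold) {
    return { readers: names.slice(), groups: {} };
  }
  var groups = {}, readers = [];
  for (var i = 0; i < names.length; i += threshold) {
    var groupName = appPrefix + "-grp-" + (i / threshold);
    groups[groupName] = names.slice(i, i + threshold);
    readers.push(groupName); // document gets the group, NAB gets members
  }
  return { readers: readers, groups: groups };
}
```

The returned groups map is what would then be written to the NAB by whichever of the mechanisms above (direct update, agent, web service or mail-in) the class selects.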
As usual: YMMV.


LotusScript Coding Style

QuickImage Category
I currently have the pleasure of introducing new developers to Lotus Domino. While it is old news to the Yellow Bubble, our new members wonder about code style and code smell in LotusScript programming. Looking at a lot of the existing code base they could get a rather wrong impression. So we went through a few guidelines. Of course sticking to them becomes a lot easier with the new LotusScript editor in 8.5.1:
  • Every time you skip OPTION DECLARE an angel must die. So for the love of [insert-what-is-holy-to-you] configure your preferences to automatically insert it. I've submitted a request to parliament to cane serial DECLARE offenders <vbg>
  • Keep your functions short. Functions that carry on over hundreds of lines are a maintenance nightmare. A function does one thing at a time. So the routine that loops through your document collection doesn't process the individual documents, but calls a function with the document as parameter
  • Names need to have a meaning. So variables a, b, c are a bad idea. Same applies to sub1, sub2 etc. There are very few exceptions like s for the current session, db for the database, dc for the document collection, v for the view and doc for the document to work on. Read Joel's write-up on naming conventions.
  • Decompose your applications well. If you think that is a process of biological decay, take a free Stanford University course in computer science
  • Global variables are a lazy programmer's signpost (of course no rule without exceptions)
  • Call MsgBox(Err$) is not error handling. It is pure laziness. Use an error handling framework like Julian's OpenLog or Devin's enhanced log, both available on OpenNTF. Name your error handlers after the function: On Error Goto Err_[NameOfYourFunction]. Use an Exit_[NameOfYourFunction] label as common exit point and have your error handlers at the end of the function (see the code templates below)
  • Comments don't hurt anyone. Missing comments do. Comment the WHY not the WHAT. Use lsDoc to extract your comments into your documentation
  • While commonly associated with Java only, know your Software Design Patterns. Know them well. Read a book. Also know your Anti-Patterns (like this one)
  • Make good use of the LIST keyword. Lists are fast and can replace repeated lookups. A list can't contain a second list, but it can contain an object with a list as public property, which can contain objects with a list as public property ... you get the idea.
  • I like the factory pattern. While the factory implementation is not in line with the pure teachings it makes the code much more readable (see code below)
  • Don't sprinkle your (web) forms with JavaScript calls. JavaScript belongs in JS files and gets referenced from there. Another bad habit is hard coding the form to act on: document.forms[0] will break sooner or later (when you add the custom tagging or the portal integration or or or)
  • Don't use document.all.[NameOfSomeButtonYouAddedHiddenly].click() to execute special functions. If you have back-end functions to call use a hidden field [NextAction] and a custom submit button (passthru HTML): <input type="submit" value="Do something special" name="NextAction"> The field NextAction will contain the value of the button. If you are smart you read the label from a config document, so you can change it without breaking your code.
  • While it is convenient, rethink the use of the shorthand doc.[FieldName]. While it has (contrary to common belief) no measurable impact on performance, it has an impact on readability (if you see doc.something you need to think every time: is this a property or a field name?) and on your ability to switch between languages, since Java and JavaScript don't entertain this notion.
  • ViewEntries beat documents in speed any time. If you have to run through a long list of entries and only need to update a few, pack the criteria you need into view columns and run through the ViewEntries opening only the documents you need to change.
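The [NextAction] hidden-field technique above boils down to a dispatch on the submitted button value. A sketch in JavaScript - the handler names are hypothetical; in Domino this logic would live in the WebQuerySave agent reading the field from the document context:

```javascript
// Sketch: dispatch a submitted form on the NextAction field.
// Handler names and the doc shape are illustrative assumptions.
const handlers = {
  "Do something special": (doc) => ({ ...doc, status: "special" }),
  "Submit for approval":  (doc) => ({ ...doc, status: "pending" })
};

function dispatch(formData) {
  const handler = handlers[formData.NextAction];
  if (!handler) throw new Error(`No handler for "${formData.NextAction}"`);
  return handler(formData);
}
```

If you read the button labels from a config document, the keys of the handler table come from the same place, so renaming a button never breaks the dispatch.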
Some code templates we looked at:


Distributed Document Locking Web Style

QuickImage Category
Notes and Domino applications often revolve around documents and approvals. When editing a document the application must ensure that nobody else can edit it. In Lotus Notes applications we have the document locking functions which can even do distributed locking. Looks good and works well, if it weren't for the last sentence in the developer help: "Locks are not supported in Web applications." So you need to roll your own. Distributed operation can be tricky. The solution is to use an XPage (or a servlet for older versions of Domino) on each server that runs a locking service. This service figures out which server is the master lock server and tries to use it. If successful it issues a lock; on failure to reach the master it provides a provisional lock. The master server registers which servers have asked for a lock status and notifies them when a lock is removed or renewed. Lock information is cached locally to minimize user delays. The flow would look like this:
Distributed document locking on the web
There are a number of considerations when designing the detailed functionality:
  • Should the document be locked only by the service (for web documents) or also for opening the documents from a Notes client
  • Should the locking service use the master server to lock (yes, if you answered the previous question with yes)? This is optional - you could get away with code without Domino classes
  • How to execute the locks: do an Ajax call before opening? Run the check in the QueryOpen? Check on save that you had a valid lock?
  • Should a lock expire automatically (so no unlock function is needed), with clients renewing their locks periodically? How long should a lock live (5 minutes in a typical WebDAV server)?
  • Should a lock be persistent (the opposite of the expiring locks) and survive a server reboot?
  • Would single sign-on be required (LTPA)?
  • What would a syntax look like both for requests and responses?
This is what I came up with:
Since a user "talks" to their own server, the lockmaster has access to session.username. So a relock or unlock request can be verified against the user who made the original request, and invalid username/token combinations can be ignored or rejected. The response from the lockmaster would be in JSON and rather lean:
{ "status" : "ok|fail",
    "owner": "Full name of the Owner"
    [, "id": "tokenid"]
    [, "expiry": 600]
    [, "msg": "Eventual explanation of the failure"]
}
The response slightly varies depending on the type of action performed and by whom. Using this approach you should have a fairly robust browser based locking mechanism.


Measuring performance when accessing a RDBMS from Domino

Accessing RDBMS data from Notes applications is quite popular. One of our customers asked what acceptable performance looks like when accessing an Oracle database from XPages. We discussed test patterns and approaches. Just running one type of test wouldn't give an accurate picture. The total time for a test run is composed of the raw database access speed (mostly network and local driver performance), the Java JDBC layer, the XPages engine and the application running on XPages. So you have to test all these components:

A complete system test has 4 phases
So how to test these? (You could rip the steps apart on a scientific basis, but they hold up for practical purposes.) Never forget that the subject and object of an observation influence each other: the mere fact of measuring the performance changes it:
  1. Network and native library: You use the command line interface (in Oracle a terminal-like window) to execute whatever query you want to test. Record the time. You can probably automate this using a tool like STAF. This is your baseline. If this is already slow, you know the network or the database might be the problem (and yes: you could run the test on the DB server too, eliminating the network in between).
  2. JDBC and connection pool: You don't write your own connection pool manager, do you? Using a few lines of Java, again from the command line, will show how your VM is doing and how well the JDBC driver talks to the native libraries. You would already have a class that implements the Java List interface (preferably as a virtual list), so you can use that as a data source later on. Heap size is a good candidate for tweaking performance here
  3. XPages engine: You have code in one button instantiating the class you used in the previous step and executing the same queries. This will show you the difference between "raw" Java performance and the XPages engine
  4. Your XPages application: All your code is here: your events, your JavaScript. The difference to the previous step highlights optimization potential in your code
Don't forget: running a test once isn't realistic. You need multiple iterations and tools like JMeter or Rational Performance Tester to simulate concurrent user load. As usual YMMV.
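The multiple-iterations point can be sketched as a small timing harness - shown here in JavaScript for brevity (the real phases would use SQL*Plus, plain Java and XPages respectively); runQuery is an injected stand-in for whatever the phase under test executes:

```javascript
// Sketch: time a query function over many iterations and report average,
// median and max, so each test phase can be compared on the same footing.
function measure(runQuery, iterations = 100) {
  const times = [];
  for (let i = 0; i < iterations; i++) {
    const start = process.hrtime.bigint();
    runQuery();
    times.push(Number(process.hrtime.bigint() - start) / 1e6); // milliseconds
  }
  const sorted = [...times].sort((a, b) => a - b);
  return {
    avg: times.reduce((s, t) => s + t, 0) / times.length,
    median: sorted[Math.floor(sorted.length / 2)],
    max: sorted[sorted.length - 1]
  };
}
```

Comparing median against max across the phases quickly shows whether the slowdown sits in the plumbing (consistently slow) or in contention (occasional spikes).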


Showing Categorized Views in Web Applications

QuickImage Category  
Categorized views are kind of a trademark of Lotus Notes (client) applications. We like them, we build them, we love them. We also want to see them on the web. There is only one small issue: that display of information is pretty unique to Notes. You find this tree/table combination in other applications only to display files (like Nautilus, Finder or KDE), but not data. So a categorized view is kind of odd on the web. I played around to find alternate displays for single and multiple category data. Here is what I came up with.
  • Single Category view with Listbox
    The category isn't part of the view itself but a picklist on the left (which might be filled by a @DbColumn). The table shows the entries matching the current selection. Works out of the box already today. Interesting extension challenge: allow selection of multiple entries in the listbox.
    Single Category View with Picklist
  • Single Category view with Combobox
    A variation of the first theme. Useful if you have a lot of view columns and need the real estate on the left. Variation: Instead of the dropdown: show a link that uses a popup to allow selection of the category to show.
    Single Category View with Dropdown combo
  • Multiple Categories with Combobox
    This actually works with a sorted view too, since the limit key takes an array as entry. A variation, similar to the previous example, could be to show a breadcrumb link list that allows clicking or hovering to show the selection. Challenge: How to make it obvious that you need to select/narrow from left to right. Extra challenge: what to do when you change box 2 and the value in box 3 is no longer an available value
    Multiple Categories with Combobox
  • Multiple Categories in tree/table combination
    Looks like a file dialogue, so users should be familiar. You trade horizontal space for vertical space. Makes navigation in categories more accessible since all categories are available any time
    Multiple Categories in tree/table combination
  • Pivot view on 3 categories
    Category 2 becomes the columns of the table, Category 3 the rows. Adds sums to rows and columns. Good base material for graphs. Challenge: decide on the aggregation mode: sum/average/percentage -or- optional display of such a column
    Pivot view on 3 categories
  • Pivot view on 3 categories with data rows
    Similar concept like the previous but with individual data rows displayed. Might show additional aggregation rows or columns
    Pivot view on 3 categories with data rows
Now someone needs to build all these custom controls. All the images have been built using Balsamiq Mockups.
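The pivot displays boil down to a two-key aggregation with row and column sums. A sketch of the "Pivot view on 3 categories" idea, with category 2 as columns and category 3 as rows - the record shape is an assumption for illustration:

```javascript
// Sketch: pivot records on two categories (col = category 2, row = category 3)
// with row sums, column sums and a grand total, as in the mockup above.
function pivot(records) {
  const table = {};        // row -> col -> sum
  const colSums = {};
  let total = 0;
  for (const { row, col, value } of records) {
    table[row] = table[row] || {};
    table[row][col] = (table[row][col] || 0) + value;
    colSums[col] = (colSums[col] || 0) + value;
    total += value;
  }
  const rowSums = Object.fromEntries(
    Object.entries(table).map(([r, cols]) =>
      [r, Object.values(cols).reduce((s, v) => s + v, 0)])
  );
  return { table, rowSums, colSums, total };
}
```

Swapping the sum for an average or percentage is where the aggregation-mode challenge mentioned above comes in.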


How much bandwidth does a Domino server need?

QuickImage Category
We get this question quite often lately, obviously driven by who-does-not-need-to-be-named claiming lower bandwidth requirements. Of course that is utter nonsense. There is no intrinsic bandwidth requirement in Domino. After all, Notes servers were happily running on 4800 baud modem connections. Bandwidth is speed, so when you rephrase the question, you see it is missing half of it: "How fast should [insert-your-favorite-land-transport-vehicle] be?" The logical reply: to do *what*? So when looking for bandwidth requirements you need to know: how much data do I have, and in what amount of time do I need (or want) this data to be delivered. So step 1 is to compute these values:
  • Average requirements:
    [Average/Median number of messages per hour] * [Average/Median size of message] / (3600 * [Acceptable average delivery time in seconds])
    The 3600 converts the hour into seconds.
  • Peak requirements: [Peak number of messages per hour] * [Peak size of message] / (3600 * [Expected maximum delivery time in seconds])
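Taken literally (with 3600 seconds to the hour), the formula can be sketched as:

```javascript
// Sketch: the bandwidth formula above, literally. Sizes in bytes,
// result in bytes per second. Works for both the average case
// (average counts/sizes) and the peak case (peak counts/sizes).
function requiredBandwidth(messagesPerHour, messageSize, deliverySeconds) {
  return (messagesPerHour * messageSize) / (3600 * deliverySeconds);
}
```

For SMTP/MIME traffic you would multiply the message size by roughly 1.34 to account for the attachment swell mentioned below.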
These formulas are product independent. Now you can apply additional factors. E.g. when messages are transmitted using SMTP/MIME, attachments swell by about 34% due to the Base64 encoding. Notes compresses documents, data and network communication and can save 5-70% of transmission size. Why this big spread? Well, when you transmit a compressed archive file there is little left to squeeze out of it; an old MS-Office document on the other hand can lose 80% of its size when compressed. There are a few caveats of course:
  • Corporate habits: we see very often that 80-90% of messages are retrieved in just 2 of 24 hours. So when you calculate with 24000 messages/day you miscalculate your average as 1000 messages/hour, while your true average is 9600 in the relevant hours.
  • You underestimate your growth. What might have been enough 3 months ago might not be good enough one year in the future. (IBM internally seems to be different: using Lotus Quickr and Lotus Connections we actually see a decline in message volume)
  • Management (or user) expectations: they expect prompt delivery even for the biggest messages at peak time (which ties back a little to the first point)
  • Bandwidth availability. This is mostly an issue on VPN connections. The nominal speed my ISP bills me is far higher than what I am ever able to get.
What's your bandwidth experience?


NotesDatabase.FTSearch vs NotesView.FTSearch

QuickImage Category
When looking for data in a Notes database using LotusScript you have 3 possibilities: NotesView.getDocumentByKey (and its cousin NotesView.getEntryByKey), NotesDatabase.Search and [NotesView|NotesDatabase].FTSearch. Use each of them wisely (when to use what might make another nice post). I want to focus on FTSearch for now. One would expect that a full text search against a view is substantially faster than one against the whole database. However the results are quite different. The full text index is built once for the entire database, so even an FTSearch against a view will use the index for the whole database. Searches are actually very fast. What makes a difference is how the results are processed and how you intend to use them. If you plan to list them all out, there is no real difference. If you only want to show a subset, continue reading.

Very often a search returns 1000 documents but you only want to show 20 or 50 at a time. The core search returns a document collection that contains the UNIDs but not the document objects. When you use db.FTSearch the document objects are initialized as you loop through the collection with NotesDocumentCollection.getNextDocument(doc). So if you only use a fraction (50 of 1000) you only initialize 50 document objects. On a NotesView.FTSearch on the other hand the collection gets fully initialized, since Domino needs to check whether each document meets the view's selection criteria. If it matches, the document stays in the result collection; if not, it gets removed. So even if you only use a few documents you have to bear the waiting time of all the document object initialization calls. Unfortunately, if you need a very specific sorting sequence you need to stick with a view. Andre gives more advice. In summary: in cases where your expected results are much bigger than what you want to show, NotesDatabase.FTSearch beats NotesView.FTSearch
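The difference can be illustrated with a lazy-versus-eager sketch - JavaScript generators stand in for Domino's on-demand document initialization, and the counter is purely illustrative:

```javascript
// Sketch: why db.FTSearch wins when you only page through part of the
// result. "Initializing" a document is simulated by a counter; the lazy
// collection only opens what you actually read.
let initialized = 0;
const openDocument = (unid) => { initialized++; return { unid }; };

function* lazyCollection(unids) {          // db.FTSearch style
  for (const unid of unids) yield openDocument(unid);
}

function eagerCollection(unids) {          // view.FTSearch style: every
  return unids.map(openDocument);          // document is opened up front
}

function firstN(iterable, n) {
  const out = [];
  for (const doc of iterable) {
    out.push(doc);
    if (out.length === n) break;
  }
  return out;
}
```

Reading the first 50 of 1000 results costs 50 initializations in the lazy case but all 1000 in the eager case - exactly the waiting time described above.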


Space Savings in Notes 8.5 using ODS51 and Compression

QuickImage Category
We all heard the stories about DAOS space savings. In case you missed the party, go download the estimator and run it against your server. But there is much more to the R8.5 release. I'm using the new ODS structure (ODS51) locally too (simply add CREATE_R85_DATABASES=1 to your notes.ini and run a little script). My mail file as of just now has about 6000 documents, almost no attachments (I'm using MyAttachments) and design and data compression enabled. I have tons of folders that I use regularly, so there are quite a few view indexes in that NSF.
Database in native Notes 8.5 format, compression enabled
Out of curiosity I replicated the mail file back to a test server running Domino 7.x. Domino 7.x runs ODS43 and doesn't support document or design compression. That same database swelled to 273 MB. This is an increase of 50% (or if you walk the other way around: moving from ODS43 to ODS51 saved me 33.3%). That's a very compelling reason to act. Database in Notes 6/7 format ODS43
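The two percentages are the same saving stated from opposite ends; a quick arithmetic sketch:

```javascript
// Sketch: a 50% increase one way equals a 33.3% saving the other way.
const ods43Size = 273;                 // MB, from the Domino 7.x test server
const ods51Size = ods43Size / 1.5;     // undo the 50% swell
const savingPercent = (1 - ods51Size / ods43Size) * 100;
```

So the ODS51 copy weighs in at roughly 182 MB - a third smaller, not half.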
As usual YMMV


Running one agent as user or anonymous

QuickImage Category
An interesting question landed on my desk: "In a web application you make extensive use of agents to prepare or render content. When the user is authenticated you want the agent to run in the user's context. For anonymous users you want the agent to run in the agent signer's security context, so you don't need to expose your resources to anonymous browsing. How do you do that?"
  1. Write all your business code in functions in a script library. Have one function as entry point. It is a matter of taste to use the documentContext as parameter or to retrieve that inside the function.
  2. Write two agents that both call that function: "agAnonymous" and "agUser". Configure the first one to run with the signer's access, the second one to run as the web user.
  3. In your WebQuery[Open|Save] event use this formula:
    agentToRun := @If(@UserName = "Anonymous"; "agAnonymous"; "agUser");
    @Command([ToolsRunMacro]; agentToRun)
As usual: YMMV.


Minimizing your server downtime when upgrading

QuickImage Category
We all love upgrades. They steal the nights, the weekends and the public holidays. But it doesn't have to be that way. For this post let us presume you are running Domino 6.x and want to upgrade to Domino 8.5. While Domino 8.5 can very well run on the same hardware specification as a Domino 6.x server, your server is showing its age and is anyway too small for the data growth you experienced during the last 5 years. So you bought a new box. With that you can get away with a downtime of 10 seconds in 6 easy steps (or phases). Here you go:
  1. Phase 0
    This is where you are now. That phase had a duration of 5 years and ends with the management approval to buy a new box and its delivery to your closet data center.
  2. Phase 1
    You install and configure a brand new Domino R8.5 server with a new name and a new IP address. In our sample that server is called tmpServer/YourOrg. Make sure to check the following:
    • You make that new R8.5 server the directory server of your Domino Directory
    • You have added CREATE_R85_DATABASES=1 to the notes.ini and made the server a member of LocalDomainServers and/or your server group. You have checked all databases to include the group as manager in the ACL
    • You made sure that the new server has the same access (ACL level, roles) to your databases like the existing one
    • You replicate all databases from your existing server to the new server
    • You verified (using document counts) that all documents indeed have been replicated (you could have an agent checking UNIDs if you want to be fully safe)
    • Ensure that in your databases the properties for LZ1, data and design compression are set. Setting them has no effect on an existing database until you run a compact -c or create a new replica
    • Update (thanks Matt): Verify that your replication formulas made it. Also check your Unread marks.
    • The server document allows only a group "UpgradeProject" access to the new R8.5 server. This group contains LocalDomainServers and the team that does the upgrade
    • You copy both server ids to each of the servers
    Phase 1: the new server goes on stream
    Duration: Anything between a few hours and a few days during regular working hours.
  3. Phase 2
    You unplug the network cable from the new server and shut down the Domino server. You edit the IP configuration of the server to use the existing server's IP address (this is why you unplugged the cable in the first place). You edit the server's notes.ini and point it to the existing server's to be used. Check the notes.ini for more parameters you may need to set (e.g. network port addresses for partitioned servers etc.). Restart your server but keep it unplugged.
    Phase 2: the new server gets unplugged
    Duration: 5-10 minutes (depending on how long your server needs to boot), best during off-peak hours since Phase 3 needs to follow immediately.
  4. Phase 3 (The Downtime)
    Best you do this with a colleague, unless the boxes are close to each other. You type in the Domino server console: Set Config Server_Restricted=1 (thx Thomas) and Drop All (if you are a nice guy you warned your users with a broadcast message before you do that. Remember: a (!) sends the broadcast to a dialog box). Your server is now clean of users and no new user session can be opened. Pull out the network cable and plug in the network cable of your new server. Since the server name and the IP address are the same as the old server's, no other configuration needs to be done. You are back in business.
    Phase 3: The new server goes online
    Duration: 10 sec (If you type and plug fast even shorter).
  5. Phase 4
    You edit the notes.ini of your old server and point it to a temporary You edit the IP address of the server to point to the temporary IP address. You remove the Server_Restricted line from the notes.ini. You shut down the server, plug the network cable back in and reboot it. A final replication with the new server makes sure that anything created in Phase 2 is back where it belongs. You have full working access to the "old" databases now.
    Phase 4:check that everything works
    Duration: Take your time. Check everything.
  6. Phase 5
    Everything worked out, you reformat your old server and give it a new lease of life.
  7. As usual: YMMV. Update: Elaborated Phase 1 based on the comments below.


Picking your routing and replication architecture

QuickImage Category
Routing and replication are core functions of the Domino server. There are a few basic facts you need to consider when setting them up. A very popular setup, which I quite like too, is a hub-spoke architecture: a central hub communicates with the spokes, so any communication has a maximum of 2 hops. An additional advantage of such an architecture is that a server only needs to "see" the hub to reach any of the other servers. But even for a hub-spoke architecture there is room for improvement. Let us make two assumptions: the hub is where your SMTP mail arrives and departs (so "hub" could be a cluster) and all servers can see each other on the network.
  1. Mail Routing
    The normal hub-spoke routing has the advantage that every email message is delivered with a maximum of 2 hops and that you only need 2 connection documents per server. It also works independently of network architecture or IP ranges.
     Hub Spoke Routing
    On the downside you need connection documents and every message will use 2 hops. If you have a lot of large internal traffic you create a bottleneck at the hub (even with multiple mailboxes). When all Domino servers "see" each other, direct routing will be more efficient. To set up direct routing you put all participating Domino servers into the same Notes Named Network (which implies that they all share the same network protocol - not an issue with TCP/IP everywhere today). You won't need connection documents anymore and routing is instant and direct.
    Notes Named Network Routing
    Are there reasons (besides obviously lacking the prerequisites) why you wouldn't want to use direct routing? When you use your hub to perform central functions like virus scanning, content integrity, enterprise archival, compliance checks etc. you want every message to pass through your hub. Be careful what you wish for, you might just create your next bottleneck.
  2. Replication
    Replication comes with many choices: which server starts it, pull only, push only, pull-pull, pull-push. So it can get a little confusing (should I draw a diagram for you?). Hub-spoke replication architectures make a lot of sense. Within 2 cycles all replicas are current. This is especially important for your system documents. However you need to be clear about how you set it up. The typical setup is for the hub server to be scheduled to replicate with the spokes. Of the 2 options, pull-pull and pull-push, we mostly find pull-push. This means the hub does all the work.
    Hub Spoke Replication
    Since the hub doesn't do anything else (except perhaps running some central agents?), that seems a sensible choice. However it has a large pitfall. By default there is one replicator task up and running per server. So your hub is replicating with one spoke at a time. You can increase that number to 4, but with a large change set and many spokes you will run out of your replication time window. On the other side of a replication there is just another user session (watch your console, you will find: "Session opened for server Hub/MyCorp"). So when you turn the replication direction around it scales much better.
    Spoke Hub Replication
    When you schedule a pull-push replication on every spoke, you can schedule all replications at the same time, since they are just sessions on the hub. You still need 2 cycles to have all documents on all servers. Why would you not want to do that? 2 potential reasons (one stronger, one weaker) come to mind: firstly, since all spokes push documents to the hub at the same time, the peak utilization of the indexer on the hub goes up and might slow down other functions temporarily (time critical agents?); secondly, replication is a bit "messier": the spokes might get some of the updated documents already in the first cycle instead of the second. If your application relies on sets of documents (Lotus Workflow comes to mind) you end up with some documents "missing" until the completion of the second cycle. Other than that, spoke-hub replication is your method of choice.
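A back-of-the-envelope model (all numbers illustrative) shows why turning the direction around scales better: hub-driven replication is serialized over the available replicator tasks, while spoke-driven replication runs as concurrent sessions on the hub.

```javascript
// Sketch: comparing the two scheduling options. minutesPerSpoke and the
// full-concurrency assumption for spoke-driven replication are illustrative.
function hubDrivenMinutes(spokes, minutesPerSpoke, replicators = 1) {
  // the hub replicates with one spoke at a time per replicator task
  return Math.ceil(spokes / replicators) * minutesPerSpoke;
}

function spokeDrivenMinutes(minutesPerSpoke) {
  // all spokes connect at the same time; each is just a session on the hub
  return minutesPerSpoke;
}
```

With 20 spokes at 5 minutes each, the hub-driven default needs 100 minutes per cycle (25 with 4 replicators), while the spoke-driven variant finishes a cycle in roughly the time of the slowest spoke - at the price of the indexer peak described above.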
As usual YMMV.


What does the ideal Notes/Domino 8.5 upgrade look like

QuickImage Category
Lotus Notes and Domino upgrades are comparably easy. The only challenge: a poorly implemented Domino server will stay poorly implemented even after an upgrade. Unless of course you fix things. Once you know what you want to achieve there are many ways to get there. You could simply use Smart Upgrade to push out the clients and just install the new server software. What you end up with is your "old" Notes/Domino installation running with new code. That doesn't really cut it. It is like upgrading your trusted old Volkswagen two seater to a Mercedes S class (5 seater) but refusing to take more than one passenger, since that was your limit so far (or for that matter: to drive faster than 150 km/h, since that was the max the VW did). So how can you identify the ideal upgrade? Some of the following stuff should already run in your domain, so it wouldn't require any effort other than a nod, but there are Domino installations that run despite their poor configuration (Domino is very forgiving):
Update: I have color coded the items a well managed R6/7 domain would have in place already before your upgrade.
  1. The Domino Server
    • You have defragmented your storage. You can use OS tools, an OpenNTF project or a commercially supported tool. Do that before and after the installation. You could consider different file systems used by other more stable operating systems.
    • You designed a high performance Domino server
    • You have implemented Domino Domain Monitoring (DDM) to run your server in "auto-pilot" (Don't forget the activity trends)
    • The Domino Configuration Tuner (DCT) has given you a clean bill of health
    • You have implemented the Certificate Authority. You use OU to divide your user base (typically by functions/departments/locations)
    • You have implemented the ID-Vault
    • You have cleaned your group structure to be machine verifiable and allows for next level automation
    • Users can create their own databases based on approved templates
    • All notes.ini variables you changed are in the server configuration document
    • Your SMTP routing is fully configured for Spam defense
    • The Domino server is clustered for high availability
    • Your network ports have encryption and compression enabled
    • DAOS is up and running (make sure you get your backups right)
    • Out-Of-Office is handled by the service, not the agents
    • The room and resources database is deployed
    • All servers in the same Notes Named Network can see each other on the network and each NIC interface on the server has its own NNN
    • Adminp runs flawlessly ( make sure your system databases replicate properly)
    • Traveler is up and running (buy yourself an iPhone as excuse)
    • All databases on your server have both the latest design and the latest ODS (currently ODS51)
    • You have designed and implemented Domino policies for all aspects of Notes/Domino usage (that might very well be a project in its own right)
    • On your Domino server you have space for all user profiles since you have implemented roaming user profiles
    • You have installed and configured your Sametime chat entitlement
    • You have installed and configured your Quickr entry entitlement
    • There is a widget catalog with useful widgets for your organization
    • Smart Upgrade is configured
    • Your routing and replication structure is Spoke-Hub, not Hub-Spoke (that's a topic for another post)
  2. The Notes Client
    • You have defragmented your PC's harddisk. Do that before and after the installation. You could consider different file systems used by other more cherished operating systems.
    • Your machines have current drivers (Video is important) and patch levels
    • The Notes client installation has been moved to the Program Files for the application and the User Files for the user's data files (in Notes lingo we call that shared user install), so users logging into a workstation get their Notes install and not another.
    • You have implemented roaming users (one way or the other)
    • Policies configure and (if your users want to be carefree) lock down your Notes clients
    • Shared login is implemented
    • The only version deployed is the full client, not the basic one. For weaker machines you use the notes.ini setting (deployed by a policy) to start the "classic" client
    • Users know what the Notebook (formerly Journal) is for and use it
    • You deployed (optional) the MyAttachment tool
    • You have useful widgets configured
    • Firefox is your default browser
    • You have Windows, Linux and Mac clients
    • Smart Upgrade is configured
    • Users actually got upgrade training. (consider this)
  3. Your applications
    • You have identified performance gaps (e.g. too many lookups) and a plan to address them
    • The application's views are facelifted using the Java views
    • Your applications start making use of composite applications
    • There is a plan in place to recast your applications using XPages
    • You implement useful helpers in the sidebar
    • My personal favorite: You use eProductivity
Important: The list above isn't the steps you need to do, but the results you should strive for. And remember: a lot of these should be in place already. If they are not, now is the time to clean up. Don't settle for less. As usual: YMMV. Please comment on what to add.


Storing documents by lifecycle phases

QuickImage Category
In a business partner meeting (always an excellent source for blog ideas) in Beijing an interesting question popped up: "How should I split a (Domino) database if it gets really huge". The usual approaches are to split them along location or department lines. However in the discussion an interesting alternative approach emerged:
Database size vs. Update frequency
Documents go through 4 phases:
  1. Draft: the document gets created but is not complete yet. Very often that state is neglected by Stalinist validation code.
  2. Current: The document runs through a workflow and people actively work with it, changing it often (e.g. add approvals)
  3. Historic: The document reached its final form and does not change any more. The document is still relevant and is looked at (often)
  4. Old: Nobody cares for the document any more, but you want to keep it for compliance or forensic reasons
From phase 1 to 4 the frequency of updates declines while the number of documents increases. So splitting these documents into 3-4 different databases can make sense. In an XPages (or Dojo) application you can easily unify the display of this data. Food for thought.
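The split described above boils down to a routing decision per document. Here is a minimal sketch in Python; the phase names follow the four phases above, while the database file names are invented for illustration:

```python
# Hypothetical sketch: route a document to a target database by lifecycle phase.
# Database file names are placeholders, not from the original post.
PHASE_TO_DATABASE = {
    "draft": "requests_draft.nsf",     # created, not yet complete
    "current": "requests_active.nsf",  # in workflow, frequently updated
    "historic": "requests_done.nsf",   # final form, still read often
    "old": "requests_archive.nsf",     # kept for compliance only
}

def target_database(phase: str) -> str:
    """Return the database that should hold a document in the given phase."""
    try:
        return PHASE_TO_DATABASE[phase.lower()]
    except KeyError:
        raise ValueError(f"Unknown lifecycle phase: {phase!r}")

print(target_database("Historic"))  # requests_done.nsf
```

A scheduled agent could apply such a mapping to move documents as they age, while the XPages front end queries all four databases and presents them as one.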


Notes pattern : Inversion of logging

A typical requirement in applications is to keep a log of changes. The simplest form is to keep fields with the author and date/time of the latest change (which has the distinct disadvantage of not knowing what was changed by whom). The next level would be to keep the list of all authors and change times (still leaving the question "what?" unanswered), followed by very sophisticated tools like Audit Manager or similar home-grown solutions.
All these solutions have in common that they "merely" record changes (typically in a query-save or post-save event) after they happened (from a big picture perspective, a log on query-save technically runs before the save, but after the trigger that will commit the change). While this works reasonably well for auditing, adding another typical requirement calls for a different solution. More often than not, the parts of a workflow document that a given user may alter change over the course of the application's workflow. While Author fields protect a document at large, safeguarding individual fields becomes more tedious, especially when you can't group them in access-controlled sections (e.g. in a web application).
A related use case: documents could be updated by users who have current information but don't "own" the documents. Typically they send an email to the document owner (if that is visible) asking for the update of that information. Asking somebody to update data in some other system via copy & paste from an email *is* a sign of a broken process. These seemingly contradictory requirements, "update of certain fields only" and "update by anybody", can be solved using a pattern that I christened "Inversion of Logging":
Instead of logging what has changed, an application using this pattern creates a change request stating what will be changed. This request is then processed by a central system and checked against system constraints. If an authorized user requests a change, the changes are applied without further user interaction, and the change request is kept as the record. If an unauthorized user requests a change, a workflow is kicked off to determine whether the request is suitable for approval and to route it to the data owner (or data guardian). Once approved, the changes are applied to the main document.
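The request flow can be sketched in a few lines. This is an illustrative Python sketch, not Domino code; all class and field names are invented:

```python
# Sketch of the "Inversion of Logging" pattern: every edit becomes a change
# request; a central processor applies it directly for authorized users and
# routes it into an approval workflow otherwise. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    requester: str
    field_name: str
    new_value: str
    status: str = "pending"

@dataclass
class DocumentStore:
    fields: dict
    authorized: set               # users allowed to change fields directly
    log: list = field(default_factory=list)

    def process(self, request: ChangeRequest) -> str:
        if request.requester in self.authorized:
            self.fields[request.field_name] = request.new_value
            request.status = "applied"
        else:
            request.status = "awaiting approval"  # kick off approval workflow
        self.log.append(request)   # the request itself is the log record
        return request.status
```

Note how the audit trail falls out for free: every change request is kept, whether it was applied immediately or went through approval first.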
Flow of Inversion of Logging
To make this internal flow transparent and pleasant to the users a few considerations need to be made:


Fixing a Notes database - Addressing Fragmentation and a Linux edition

One popular entry in this blog is "Fixing a Notes database", which provides a small MS-Windows command file to clean up a Notes database when your server or client isn't running. While it works very well, it has two caveats: for one, on an MS-Windows system you might encounter severe fragmentation on your drive (thank NTFS for that); two, it is MS-Windows only.
For the first problem you have three possible actions short of moving to an OS with less fragmentation problems for large files:
  1. use Windows' built-in defrag
  2. use DominoDefrag on OpenNTF
  3. use Defrag.nsf from our Australian Business Partner Preemptive Consulting
I haven't used the latter two (since they were not available when I left the MS-Windows camp). Both apparently use the Windows defrag API, while Defrag.nsf also does some clever things around size and index allocation (so they claim). I'll report on my experiences soon.

So where does that leave a Linux user? There is a lot of discussion around the need, or the lack thereof, for defragmentation on Linux, so I leave that to the individual reader. It isn't as easy on Linux, since you have a choice of file systems: ext3/4, ReiserFS, JFS, XFS, ZFS etc. (I run JFS). However, having the little script mentioned in the post above comes in handy, so here we go:
Notes Edition
/opt/ibm/lotus/notes/fixup -L -F -Y $1
/opt/ibm/lotus/notes/compact -D -c -i  -n -v -ZU $1
/opt/ibm/lotus/notes/updall -R $1
/opt/ibm/lotus/notes/updall -X $1
Domino Edition
/opt/ibm/domino/bin/fixup -L -F -Y $1
/opt/ibm/domino/bin/compact -D -c -i  -n -v -ZU $1
/opt/ibm/domino/bin/updall -R $1
/opt/ibm/domino/bin/updall -X $1
You might need to adjust the path to the executables if you have installed them elsewhere. Create a file in /usr/local/bin with the above content and make it executable (you don't need a file extension):
gksu gedit /usr/local/bin/fixnotesdb
sudo chmod +x /usr/local/bin/fixnotesdb

Make sure that your notes.ini contains CREATE_R85_DATABASES=1 or CREATE_R9_DATABASES=1
As usual: YMMV.
Update: There is a Mac version too


Domino Servers and Anti-Virus

You have built your high performance Domino server, but it doesn't perform. Besides running a proper OS, you need to ensure the right anti-virus configuration. All big anti-virus vendors have plug-ins that intercept documents and check their integrity before they are saved into a database. So you are safe (as far as you can say that). However, with the scanner's default configuration the NSF files are scanned again when you want to access them. And this is a bad idea: pointless and an I/O killer. So go check: does the virus on-access scanner leave your Notes databases alone (*.nsf/*.ntf in the Notes data directory)? Excluding the various temp locations and the transaction log is also a good idea. Works wonders on performance.


Building a high performance Domino Server

Domino can take huge user populations. To do this successfully, all elements of a Domino server have to be considered carefully. Following the old insight "it is always the cable", you need to pay attention to the hardware layout. While you can perfectly well install a Domino server on a low-end laptop or a VM image, that wouldn't give you the peak performance you are looking for. You rather want something like this:
Server layout for a high performance Domino server
Let us look at the details:
  • Disk layout
    • Operating system and applications: This is your first RAID 1 array. Since this data hardly changes and isn't that large, a small but fast-spinning drive will do. RAID 1 protects you against the failure of one drive and speeds up read operations. Some suggest separate drives for applications and OS, but that might be overkill. You could consider separate partitions instead (easy on Linux/Unix).
    • View rebuild directory: There is a nice notes.ini variable View_Rebuild_Dir. You can point it to a separate drive to store the temporary files created during index updates. The default is the system temp directory. This directory is a good candidate for a RAM disk or a solid state disk when your system is updating a lot of views all the time.
    • Domino data: Typically you have RAID 5/RAID 10 storage here to accommodate the large amount of data (users demand Google-sized mailboxes and your applications don't shrink magically). More and more we see SAN systems for Domino storage, which is OK. Just keep in mind: don't store the databases of different cluster mates in the same SAN, since that defeats the idea of a shared-nothing cluster. While we support the use of NAS, network latency and bandwidth are limiting factors. Archival servers run fine with NAS, but not your high performance primary production server.
      Update: Fixed the graphic to show RAID 10, since it shows much better performance than RAID 5
    • Transaction logging: You have tried it: switched it on, expected great things, and it didn't perform. The flaw: for good transaction logging performance you need a dedicated disk. Not just another partition, but your very own spindle (RAID 1), ideally with its own controller. It would be interesting to see how solid state disks work here.
    • Full text index: Since Domino 8.5.3 you can move the full-text index to a different drive. This improves data throughput and reduces fragmentation on your data drive. Add FTBasePath=d:\full_text to the notes.ini and run updall -f. Your 100-user server won't notice; large environments will benefit.
  • Network layout
    • Cluster replication (only if you cluster your servers): You want your cluster traffic on its own network segment. If you have 2 boxes next to each other, a crossover cable will do (afaik 1 Gbit Ethernet requires a hub). If you go three-way (highly recommended), then a hub and an IP address segment that doesn't get routed will do.
    • Server network: All servers should be connected to the server backbone. Put them into their own subnet that clients can't see, so replication never gets disrupted by clients jamming the network ports. The server network also handles mail routing.
    • Client access: If you have huge numbers of clients, you might reach the physical capacity of your network card or the TCP/IP stack. Use more than one card and/or more than one IP address to have sufficient ports available for clients to connect.
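The two notes.ini parameters mentioned above, View_Rebuild_Dir and FTBasePath, could look like this as a minimal sketch (the drive letters and directory names are placeholders, adjust them to your own disk layout):

```ini
View_Rebuild_Dir=E:\viewrebuild
FTBasePath=D:\full_text
```

After adding FTBasePath you need to rebuild the full-text indexes with updall -f before the new location is used.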
Of course none of this is new (except the shiny picture); you can read much more detail on IBM's Domino Performance Best Practices pages. This is just about the hardware layout. You need to consider the operating system too, but that's a story for another time. As usual: YMMV.
Update: There is now additional material available on how to tune an IBM System x server to peak performance. Update 2: Samir points to a nice comparison between RAID 5 and RAID 10. It's not Domino related, but insightful. One key point there: watch your controller.
Update 3: Added the separate drive for the full-text index


NoReader and NoAuthor Fields

Access control in Lotus Notes and Domino is built around the concept of positive identification, meaning you specify who can read or edit by naming them explicitly, or implicitly as members of a role or group. What you can't do is say: everybody except these people (or all members of group A, but not when they are also members of group B). We don't have PreventReader or PreventAuthor fields (which would come in handy from time to time). The only such construct is the -No Access- setting in the ACL, which has the highest priority.
For web applications there is actually a way to implement a PreventReader mechanism (it must be web only). Be clear that this is NOT a watertight method and can be compromised given enough effort. However, it is good enough for most requirements (it doesn't work in Notes clients). These are the steps:
  1. Create a Names field that will hold the entries that can't read an entry (call it PreventReader)
  2. Design all your views to only contain hidden columns
  3. Create a $$ViewTemplate for [ViewName] form with no embedded view and no $$ViewBody field, but a Body rich text field and a SaveOptions field (computed, formula "0")
  4. Create a WebQueryOpen agent (with a little creativity you can get away with one agent and one form) that prints the columns you want to display into the Body field. The logic that prints the lines needs to be extended to skip printing when @UserNamesList contains a value from the PreventReader field. Since the view columns are otherwise hidden, even ?ReadViewEntries wouldn't reveal a thing.
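The filtering step in the agent is essentially a set intersection between the user's name list and each row's PreventReader values. Here is an illustrative Python sketch (the field and name values are invented, and a real agent would of course be LotusScript or Java):

```python
# Sketch of the agent's filtering logic: skip a row whenever any of the user's
# names, groups or roles appears in the row's PreventReader list.
def visible_rows(rows, user_names_list):
    """rows: list of dicts with a 'PreventReader' list plus display columns."""
    user_names = set(user_names_list)
    return [row for row in rows
            if not user_names.intersection(row.get("PreventReader", []))]

rows = [
    {"Subject": "Party plan", "PreventReader": ["CN=Birthday Person/O=Acme"]},
    {"Subject": "Budget", "PreventReader": []},
]
# A user whose name list contains the birthday person's name only sees "Budget".
```

In the real agent this check happens while printing each line into the Body field, so excluded rows are never sent to the browser at all.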
However a user could open a direct link to the document. So there are a few additional steps required:
  1. Create one subform per form, called sf[OriginalFormName]. Cut & paste the whole form content, except the PreventReader field, into the subform.
  2. Create one subform sfUnauthorizedAccessAttempt that shows a nice message or throws the user back to the start page and/or logs the attempt
  3. In the now almost empty main forms add a computed subform with the formula @If(@IsNotMember(PreventReader; @UserNamesList); "sf" + Form; "sfUnauthorizedAccessAttempt")
Now if a user gets a direct link (e.g. forwarded by an authorized user), the subform "sfUnauthorizedAccessAttempt" will load instead of the subform with the real data, so our excluded user has nothing to see.
As usual: YMMV.


Reader Fields - extreme Edition (update)

Reader fields are "haunting" me. Recently I was asked to provide a solution for a requirement to add a very large number of entries to a reader field, a number that firmly exceeds the storage capacity of a reader field (still 32k after all these years). My first instinct was to look for alternatives like using a few group names. However, the alleged business case is the exclusion of a few people from a large group (something like: the whole department can read a document except the person who has a birthday, since it is a party-planning application). This was one of the cases where I wished for a new field type: NoReaderField (and for that matter NoAuthorField).
Well, there is always the next version. Until then I created a small class that allows you to add, remove and query names in a "virtual" reader field. Instead of reading and writing a Notes item directly, developers call a method of the class (addEntry(newName As String) or addEntries(newNames As Variant), an array) to interact with the reader field. The saveField method then distributes the values onto one or more reader fields. I didn't make it too scientific and count bytes, but rather counted the number of names. Splitting the reader names across multiple fields requires adjustments to your views: something like Reader1:Reader2:Reader3:...:Readerx. We haven't tested the impact of that on performance. Once I have data on that I'll provide an update.

Update: Turns out the class was a bad idea. There is a hard stop for the total size of all combined reader fields in a document. So the alternative is to use an approach similar to the NoReader field. I consulted with the master and discussed alternatives.

Have a look at the code yourself:
Option Public
Option Declare

'/**
' * FieldAccess is a universal class to manage large amounts of values put into text fields
' * (text, names, readers, authors etc.)
' * It manages these large amounts by splitting reader names into multiple fields to overcome
' * the 32k size limit. New fields are created as needed.
' * Good for 62500 entries
' *
' * v0.1 2009-03-20
' **/
'// Important: This class doesn't contain production quality error handling, you need to add
'// that for production use. Use OpenLog for best results!
Public Class FieldAccess
    Private AccessFieldName As String      'The base name for the Reader field
    Private maxFieldEntries As Integer     'The maximum number of entries per reader field
    Private maxNumOfFields As Integer      'Limit the number of fields, to avoid missing data in views
    Private maxTotal As Integer            'The maximum number of names in total
    Private curTotal As Integer            'The current number of entries
    Private curFieldMembers List As String 'The list of current Reader entries
    Private fieldCount As Integer          'The current number of fields
    Private initialized As Boolean         'Has the list of names been initialized
    Private doc As NotesDocument           'The current document
    Private fieldType As Integer           'Text, names, reader, author

    Sub New(curDoc As NotesDocument)
        Set Me.doc = curDoc
        Me.AccessFieldName = "Field"
        Me.maxFieldEntries = 500
        Me.maxNumOfFields = 10 'Adjust as needed
        Me.initialized = False
        Me.fieldType = 0 'No special type
    End Sub

    'Need to set the field name
    Public Property Set fieldName As String
        If Me.AccessFieldName <> fieldName Then
            Me.AccessFieldName = fieldName
            Me.initialized = False
        End If
    End Property

    Public Property Get fieldName As String
        fieldName = Me.AccessFieldName
    End Property

    'How many entries are in the names field right now
    Public Property Get count As Integer
        If Not Me.initialized Then
            Me.initializeFields
        End If
        count = Me.curTotal
    End Property

    'Add an entry to the value list
    Public Sub addEntry(entryToAdd As String)
        If Not Me.initialized Then
            Me.initializeFields
        End If
        If Not IsElement(Me.curFieldMembers(entryToAdd)) Then
            If Me.curTotal = Me.maxFieldEntries * Me.maxNumOfFields Then
                Error 8000 'We throw an error
            Else
                Me.curFieldMembers(entryToAdd) = entryToAdd
                Me.curTotal = Me.curTotal + 1
            End If
        End If
    End Sub

    'Bulk adding of entries. Suitable to pull in entries from getItemValues


Lotus Domino and RFC

In a recent request by a government customer we were asked about the compliance of Domino with various RFCs. I've listed them here for reference:
RFC 1123 | Requirements for Internet Hosts - Application and Support (STD 3) | Yes
RFC 1870 | SMTP Service Extension for Message Size Declaration (obsoletes RFC 1653) | Yes
RFC 2476 | Message Submission | Obsolete, see RFC 4409 below
RFC 2505 | Anti-Spam Recommendations for SMTP MTAs (BCP 30) | Yes
RFC 5321 | The Simple Mail Transfer Protocol (obsoletes RFC 821 aka STD 10, RFC 974, RFC 1869 and RFC 2821) | Yes
RFC 5322 | Internet Message Format (obsoletes RFC 822 aka STD 11, and RFC 2822) | Yes
RFC 2920 | SMTP Service Extension for Command Pipelining (STD 60) | Yes
RFC 3030 | SMTP Service Extensions for Transmission of Large and Binary MIME Messages | No
RFC 3207 | SMTP Service Extension for Secure SMTP over Transport Layer Security (obsoletes RFC 2487) | Yes
RFC 3461 | SMTP Service Extension for Delivery Status Notifications (obsoletes RFC 1891) | Yes
RFC 3462 | The Multipart/Report Content Type for the Reporting of Mail System Administrative Messages (obsoletes RFC 1892) | Yes
RFC 3463 | Enhanced Status Codes for SMTP (obsoletes RFC 1893) | Yes
RFC 3464 | An Extensible Message Format for Delivery Status Notifications (obsoletes RFC 1894) | Yes
RFC 3834 | Recommendations for Automatic Responses to Electronic Mail | Yes
RFC 4409 | Message Submission for Mail (obsoletes RFC 2476) | Partial | Domino can be configured to act as an MSA, but is not strictly compliant (e.g. does not qualify unqualified domains in addresses)
RFC 4952 | Overview and Framework for Internationalized Email | n/a | This is not a standard, only an informational document
RFC 4954 | SMTP Service Extension for Authentication (obsoletes RFC 2554) | Yes
RFC 5068 | Email Submission Operations: Access and Accountability Requirements (BCP 134) | Yes | This is not a standard but a document describing a set of operational best practices; Domino can be configured to support these practices


DXLMagic - using the DesignExtractor

I'll show you how to take full advantage of the DXLMagic tools, one post at a time. You have exported your database application using the DesignExporter. Now you can run reports, transform the XML, edit it etc. When you plan to work on individual elements, using the full DXL is a bit cumbersome, especially when you plan to write back just some parts of it later on. To split your file into multiple files you use the DesignExtractor. It will extract parts of your DXL based on an XPath expression. If XPath is still a mystery to you, this would be the time for a tutorial, Jeni's Introduction or Michael's Reference. Since you typically want to extract more than one sort of element, DesignExtractor uses a command file with instructions.
The syntax for DesignExtractor is:
java [PathToDXL] [ResultPath] [CommandFile]
In the latest version of DXLMagic.jar I have updated the path parameters, so you now can use "." for the current directory instead of specifying absolute path names. [PathToDXL] must point to a DXL file. I currently don't process all DXL files in one directory (maybe I should think about such an option), so you have to point it to one DXL file. [ResultPath] is where your individual files go. You will see in a second that you can specify subdirectories for individual results. The interesting part is the command file. It executes one extraction per line (empty lines or lines beginning with # are ignored) and needs 3 parameters:
  1. File prefix: The extractor tries to name each file using a name or alias found in the result of the XPath expression. If it can't find one, it uses a running number. It appends this to the file prefix. If your file prefix contains a directory separator, it will write the file into that subdirectory (and create it if needed). So the full name for any file is composed of [ResultPath]+[File Prefix]+[Name the extractor figured out]. A typical value for the file prefix would be "forms/form"
  2. Export mode: The extractor can write the result of the XPath expression into one summary file, or into one file for each result node. If you want one summary file use "summary"; for individual files use "single".
  3. XPath expression: The heart of the extractor. The XPath expression is evaluated and returns a node-set. You can do whatever you fancy, including using the extractor against other XML files (you might struggle with the namespaces a little then). To make your life easier with the XPath, the DXL namespace is abbreviated "d:". Typically you would extract high-level elements like forms, views, libraries, agents etc. However, you are not limited to that. You could e.g. extract all fields to feed them into a cross-reference system. Getting started is easy: extracting all views is the expression "/d:database/d:view".
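To get a feel for the XPath side without running the tool, here is a minimal stand-in using Python's standard library. It assumes the standard DXL namespace URI bound to the "d:" prefix; the sample DXL snippet is invented for illustration:

```python
# Minimal sketch of the extraction step: evaluate an XPath-like query against
# DXL with the "d:" namespace prefix. The sample document below is invented.
import xml.etree.ElementTree as ET

DXL_NS = {"d": "http://www.lotus.com/dxl"}  # assumed DXL namespace URI

sample_dxl = """<database xmlns="http://www.lotus.com/dxl">
  <view name="All Documents"/>
  <view name="By Author"/>
  <form name="Memo"/>
</database>"""

root = ET.fromstring(sample_dxl)
# The rough equivalent of the expression /d:database/d:view:
views = root.findall("d:view", DXL_NS)
print([v.get("name") for v in views])  # ['All Documents', 'By Author']
```

ElementTree only supports a subset of XPath; the real DesignExtractor uses a full XPath engine, so expressions like predicates and attribute tests work there too.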
The command file you typically will want to use looks like this:
Happy extracting. I'll continue that series with some of the stylesheets we have written. Stylesheets are not only good for reports, but also to whip your code into shape. Stay tuned.


Enemy of the state IT system #1: complexity

As the saying goes, the complexity of any system grows with the square of the number of components involved. We live in a complex world, so you can shrug, surrender to the unavoidable and move on. Or you can take a closer look and discover differences in complexity. There is the complexity inherent in the task at hand, and there is the complexity we add through the way we approach the completion of the task. Ordering a meal in a restaurant can be straightforward or incredibly complex. You can call these "essential complexity" and "accidental complexity", terms coined by Fred Brooks over 20 years ago. His book The Mythical Man-Month is a must-read for any software engineer, or any normal mortal dealing with software engineers. Brooks states that software isn't werewolves, so there is no silver bullet. While I fully agree with his insights, I more and more doubt that the second kind of complexity is really "accidental". It is man-made and IMHO quite deliberate. Of course we never need to assume malice where incompetence is sufficient as an explanation*. Let us look at the setup of your average web application (I skipped the provisioning and monitoring pieces, as well as the diverse interfaces to administrate all this):
Too many moving parts
After careful counting, that makes 6 servers, or a complexity of 36. The setup is typical for Microsoft environments (Exchange, Active Directory, SharePoint, IIS, .NET, SQL Server) as well as for Java setups (Apache HTTP, Tomcat/WAS, Samba, LDAP, DB2/MySQL, POP/IMAP) or your favourite scripting environment (just swap out Tomcat/WAS for the script engine of your choice). Not all of these servers need to be physical boxes or virtual images; they can be tasks running on one piece of hardware. Make that highly available (12 boxes) and you have a complexity of 144. Of course you could use Domino for your applications. The setup will look like this:
Of course to get there you have to challenge your beliefs:
  • Complexity is good
  • Web systems need to be 3 tier
  • A directory must be dedicated LDAP (Domino does LDAP, as does AD)
  • Databases must be relational
  • Only complex setups scale
  • Only with [insert-your-favourite-language-here] can web applications be developed properly
  • And then the Lotus Domino specific ones:
    • Domino is legacy
    • Domino doesn't scale
    • NSF is not reliable
    • Web development with Domino is complicated
* On the other hand: Any sufficiently advanced incompetence is indistinguishable from malice.


Building extensible applications

One challenge that all Notes developers, including the IBM template team, face is the customer's need or desire to customize given applications. Warren started to ask if that is a good thing. With a little planning you can design your applications so they can be easily customized without creating an upgrade nightmare (it still might be a bad dream). A very interesting example of such an "upgrade-safe" extension is actually the Domino Directory (a.k.a. the name and address book). For people, groups etc. it contains subforms that are empty. IBM will *never* touch these subforms; they are designed for schema extensions. So when you change these subforms to contain additional fields, code or actions, a system upgrade will never overwrite them (you need to follow some instructions to be safe). In your own applications you would provide such additional subforms so your customers can add functionality as needed (OK, it doesn't help when you want to change the main forms).
When you want to make your application's behavior extensible you need to take a few extra steps:
  • Move your event code from the forms and the views into a custom class.
  • Ideally that custom class would inherit from a base class, that determines general behavior (e.g. validation based on a system configuration)
  • Buttons, Links or Field Events can call methods of the class
  • You might have the class, which obviously needs front-end methods (otherwise "On Event" doesn't work), contain a second class that uses back-end methods only, so you can have the same function in your web application too.
  • Now create a second/third class that inherits from the class that does all the work, but leave it empty. Never touch that one: it is for your customers. That class would live in its own script library and can be overwritten by customizers without impacting your general design. Remember: interfaces live forever
  • Link that class using "On Event" to the form or view, so the form/view events are handed down the class hierarchy until executed; see the diagram below. And no: it is not expensive in execution time.
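The layered class idea above can be sketched in a few lines. This is an illustrative Python sketch (the real thing would be LotusScript classes in script libraries); all class names and rules are invented:

```python
# Sketch of the extension-point hierarchy: base behavior -> application
# behavior -> a deliberately empty customer class that the form/view events
# actually call, so customers can override without touching shipped code.
class BaseBehavior:
    def query_save(self, doc) -> bool:
        """General validation driven by configuration; always runs."""
        return bool(doc.get("Subject"))

class AppBehavior(BaseBehavior):
    def query_save(self, doc) -> bool:
        # Application-specific rules on top of the base rules
        return super().query_save(doc) and doc.get("Status") in ("draft", "final")

class CustomerBehavior(AppBehavior):
    """Deliberately empty: reserved for customer overrides, never shipped changed."""
    pass

# The form's events are wired to CustomerBehavior, so a customer can override
# query_save here while upgrades only ever replace BaseBehavior/AppBehavior.
events = CustomerBehavior()
print(events.query_save({"Subject": "Hello", "Status": "draft"}))  # True
```

The key design point: the event wiring always targets the outermost (empty) class, so the customization surface stays stable across template upgrades.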


XPages and Validation

In our XPages workshop we have a number of exercises that deal with validation. XPages allows you 4 different approaches for when and how to validate (not counting on-field-exit validation, which is a very bad habit anyway: never validate on exit, just hint that this will not submit):
When you look at applications, you typically find validation code in the submit button or attached to the submit event (that would be the second from the left). While that is widespread, it is not the best way to do it in XPages. Looking at the possibilities we have 2 discussions: server vs. client, and validator vs. button/event code. You have to be clear about the execution sequence: when a validator fails, the submission code isn't executed. If the client validation (by any means) fails, no server validation takes place.
  • Client side validation has the clear advantage of giving faster feedback, since no round trip to the server is required. However, it can only be used as a measure of user convenience, since it can easily be cancelled out using the appropriate tools (Firebug, anyone?). Client side validation needs all information for the validation downloaded to the client, so you are limited to simple validations like required fields, consistency checks, data types etc. Anything that requires server information (e.g. is there budget left, is this the right approver) better lives on the server side. Also, currently the client side validation in XPages relies on a simple alert() to notify of validation failures, and users are fast as lightning at clicking away prompts without reading them.
  • Server side validation requires a round trip to the server (a full page or an Ajax call), so it will take longer. On the server side you can rely on any resources or lookups to protect data integrity. Server side validation is also less prone to manipulation (you would need access to the server sources). In XPages you also have better UI capabilities.
In conclusion: Client side is for convenience, server side for the "real stuff".
  • Validation in code (a button, the submit event etc.) is the typical way validation is done. Being the prevalent way doesn't make it right <g>. You need to roll your own notification mechanism (like updating a label) and you tend to tie your validation into the UI. Also, when you remove a field, the validation routine is likely to break. Last but not least: you have a hard time documenting what gets validated and why. (You see where I'm going with this.)
  • Validators are defined together with a field and open up a series of possibilities. XPages offers 9 different validators.

    You can write JavaScript, use regular expressions, check for data types or roll your very own. Everything you can do in button/event code you can do in a validator. Since the validators themselves don't interact with the UI, the designer can decide how to surface the messages without changes to the validation code. When you remove a field, all its validation code goes with it, so maintenance gets much easier. Last but not least: you can run an XSLT report against your XPages source and render a report that shows each field with all its defined validators, which makes documentation easier.
  • Form validation uses the @Formulas defined in your classic Notes form. They only fire when you have specified "Run form validation" as "On Save" or "Both". Typically you would use those when upgrading existing applications.
In conclusion: validators are the way to go. Read more about them in the Domino Designer wiki on IBM developerWorks. One important aspect to consider when you start to mix the validation methods is the execution sequence, from left to right: the first validation that fails will stop the process. So if (2) fails, (3) - (5) never get executed!
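The stop-at-first-failure sequence described above can be sketched generically. This is an illustrative Python sketch, not XPages code; the validator names and rules are invented:

```python
# Sketch of the validation execution order: validators run in sequence and the
# first failure stops the chain, so later validators and the submit code never run.
def run_validation(validators, value):
    """Each validator returns an error message or None. Stop at first failure."""
    for name, check in validators:
        error = check(value)
        if error:
            return f"{name}: {error}"   # later validators are never executed
    return None                          # only now would the submit code run

validators = [
    ("required (client)", lambda v: "missing" if not v else None),
    ("is number (client)", lambda v: None if str(v).isdigit() else "not a number"),
    ("budget left (server)", lambda v: "over budget" if int(v) > 1000 else None),
]

print(run_validation(validators, ""))      # required (client): missing
print(run_validation(validators, "5000"))  # budget left (server): over budget
print(run_validation(validators, "500"))   # None
```

Note how the ordering matters: the "server" check can safely assume a numeric value because the cheaper "client" checks ran first.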


Reader fields (again)

Reader fields, and how to handle them to balance security and performance, are a never-ending topic. Let us have a closer look at what actually happens. Let us assume we have a view with 500,000 documents, where a particular user has access to 77 documents (which is not so uncommon in big organizations). In the first case the view is sorted by some criteria (case number, date or whatever) that scatters the 77 documents all over the database.
When the user opens that view from the web, requesting the default 25 entries, the Domino server will actually read more than 350,000 documents: it needs to instantiate a document object and evaluate the access control rules for each of them. These are costly (read: time and memory) operations. Performance will not meet any user expectation. Ironically this is hardly ever discovered by developers, since they test with a few hundred documents and very often have universal access. With a little change to the view layout the situation changes completely. We take the same view but categorize it, e.g. by @Unique(DocAuthors:DocReaders), which lists all documents for a specific reader (remember: an Author field includes read access rights).
Now the Domino server does an index search, which is very fast against a built index, and reads exactly 25 documents. So with a little change in the view layout we removed 99.9929% of the document reads (bad news for hardware sellers). Looks good... but you will say: wait a second. A user can have read access because her name is in a Reader/Author field, she might be a member of a group or have a specific role, and the single category will only show one of these entries at a time.
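The percentage above is simple arithmetic; a quick back-of-the-envelope check (assuming the flat view touches roughly 350,000 documents and the categorized view reads exactly 25):

```python
# Verify the claimed reduction in document reads from the paragraph above.
reads_flat_view = 350_000   # documents touched by the flat, scattered view
reads_categorized = 25      # documents read via the categorized index search

saved = 1 - reads_categorized / reads_flat_view
print(f"{saved:.4%}")  # 99.9929%
```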


Investigate notes:// links.

QuickImage Category
You are sending out emails that contain Notes links to other email systems. The router converts them to notes:// links (and with a little help even into good looking HTML mails). However when you click such a link, the document in your target database doesn't open, even if you have a Notes client installed. Now you need to investigate what is happening. These are the steps:
  • Open your windows registry using regedit
  • Look for HKEY_Classes_Root\Notes, which is the protocol handling definition for Lotus Notes. The default values of the key and its shell\open\command subkey carry the actual definition. It should look like this:
    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\Notes]
    @="URL:Notes Protocol"
    "URL Protocol"=""

    [HKEY_CLASSES_ROOT\Notes\shell\open\command]
    @="\"C:\\Notes\\notes.exe\" -defini \"%1\""
    Note that your path to the Notes installation might be different.
  • With this fixed, Notes will pick up Notes URLs and display them correctly. However there are more possible caveats. One could be the Notes URL in the email, the other the way the email application is calling the Windows function to launch Notes
  • The syntax for a Notes URL is notes://servername/database/view/documentuniqueid (For a full description see the online help). If you omit the documentuniqueid Notes will open the view (the view can be specified by name or by unid), when you omit documentuniqueid and view Notes will open the database.
  • When you create a link from a local database the server name is missing, so the Notes URL has a triple slash notes:///database/.... The Notes client will look for it locally. If the database is not local (or on the desktop) it will throw an error. With a little help you can force server names even if the database was created locally.
  • Sometimes the mail application isn't calling the Notes URL properly. This is a little harder to investigate. You need to edit your registry to point the URL protocol to something else than notes.exe, so you see what exactly is handed over at the command line (you did back up your registry, didn't you?). I've written a small Java class that can do that for you. So instead of @="\"C:\\Notes\\notes.exe\" -defini \"%1\"", you would write something like: @="\"java.exe\" -cp C:\\temp CommandlineSpy \"%1\"" (a compiled class is launched by class name via the classpath, not by pointing java.exe at the .class file). The little class will spit out a prompt with the URL handed over by the OS.
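The spy class itself is not reproduced in this post, but a minimal sketch could look like the following (class name, method name and exact behavior are assumptions based on the description above; it simply echoes whatever the OS hands over):

```java
import javax.swing.JOptionPane;

// Hypothetical stand-in for the CommandlineSpy class mentioned above:
// it shows whatever the OS hands over on the command line.
public class CommandlineSpy {

    // Joins all arguments into one report so the full hand-over is visible at once
    static String describe(String[] args) {
        StringBuilder b = new StringBuilder("Arguments received: ").append(args.length);
        for (int i = 0; i < args.length; i++) {
            b.append("\n[").append(i).append("] ").append(args[i]);
        }
        return b.toString();
    }

    public static void main(String[] args) {
        String report = describe(args);
        System.out.println(report);
        // A dialog survives even when the console window closes immediately
        JOptionPane.showMessageDialog(null, report);
    }
}
```

Compile it once, point the registry entry at it as shown above, click the link in your mail client and the dialog tells you exactly what URL string arrived.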


Domino Configuration Tuner

QuickImage Category
The Domino server has hundreds of configuration parameters and you need to be a real expert to master them all. For less smart people like the rest of us, there is the Domino Configuration Tuner (DCT). DCT can analyze R7 and R8 servers: "Domino Configuration Tuner is available to any customer free of charge. It works with Lotus Domino server 7.0 and later, and runs on the Lotus Notes client, standard or basic, version 8 and later. There are no required changes to a customer domain configuration in order to take advantage of DCT evaluation."
R6.x servers are not supported, but you can run DCT and see what happens <g>
To sum it up:
  1. DCT is free of charge
  2. No changes are required in your Domino configuration
  3. You need one R8 client to run the tool
  4. You could even install a Notes client on a stick to run DCT (see this)
  5. It does analyze R7 and later servers
  6. There is a lot of information already available
  7. The Domino wiki provides a detailed description
  8. Harry Peebles posted a slidedeck on slideshare about DCT
  9. You can watch a video tour about DCT
  10. DCT and Domino Domain Monitoring (DDM) complement each other. So while you are at it, implement both
  11. DCT is available for download free of charge from the IBM Website
  12. DCT runs fast, depending on your network configuration you spend just a few minutes per server
What's your excuse not to run it?


Workflow and Document Access

QuickImage Category
An interesting question arose in a discussion with a Korean customer this week. In that discussion I learned, that workflows are subject to a lot of cultural influences. Cultural influences come from the region as well as from the corporate culture. A typical requirement in workflow applications in AP: A user must not see other workflow documents (vs. the more European: doesn't want to/doesn't need to). A "must not" requirement mandates the use of reader fields (with all its performance ramifications) while a "doesn't want to/doesn't need to" requirement can be fulfilled with clever navigation (more on this in a later article). For good performance it is smart to use clever navigation in both cases.
When documents are protected by reader names, you typically find 3 types of entries in these reader fields: full names, group names and roles. A full name (e.g. "Roger Rabit/Toons/Warner Inc") mostly stems from the users directly involved in the workflow like the requestor and the approvers, while groups and roles typically denote departments and/or business functions like warner.apps.controlling or [Audit] (I like structured group names). When a user moves on to a new department or role (e.g. from engineering to collection) (s)he would no longer be a member of the respective groups and will only see workflow documents with direct involvement (the full name in a reader field). Depending on corporate culture that might or might not be acceptable. So what are the options to remove access for the user to these documents? What needs to be done is to make the entry in the reader field look different from the full name of that user. These options come to mind:
  1. The organisation is using OUs to specify the department of the users. The adminp/CA process is used to change the OU of the user moving to a new role. In this case the workflow database needs to be set not to touch names, reader or author fields. This is an advanced property in the ACL. The user name changes, the entry in the workflow database doesn't -> read access is removed.
    Caveat: Of course NO name change would be reflected in the database thereafter, so after changing your family name in marriage your workflow documents would "disappear"
  2. The organisation is not using OUs, or adminp *must* update the workflow database for other reasons. In this case the content of the reader field needs to be changed. Ideally the change of role would be governed by a capable tool, so starting the update agent can be automated. The agent would go through the workflow database and change the name of the user by prefixing or appending an entry (like an OU): Roger Rabit/Toons/Warner Inc -> Roger Rabit/RoleChange/Toons/Warner Inc.
    Caveat: You need to be very clear with the auditors about such an agent. Altering a workflow document might violate corporate governance standards. It is therefore a good idea to separate the fields storing the business logic from the "access mechanic".... Which leads to the next option.
  3. You need to change the application slightly. The field that computes the read access based on business logic would actually be a names field. You have another names field "DocReadersRemoved" that is computed when composed as "". The DocReaders field would be computed and have the formula @Trim(@Unique(@Replace(DocReadersCandidate;DocReadersRemoved;""))). An update agent, similar to the previous case, would only update the DocReadersRemoved field and refresh the DocReaders field. This way no business data needs to be manipulated. If that process is properly documented it can be considered "audit safe".
In any case I highly recommend limiting the number of reader and author fields per document to exactly one each: DocReaders, DocAuthors (you could use DXLMagic to do the changes). These fields most likely would be computed and pull in their values from names fields that contain the business logic. One good tip: Always include a special role in the Author field (e.g. [Joshua] if you know). That role doesn't need to exist in the ACL; actually it shouldn't during normal operation. If the database needs to be investigated for whatever reason you add the role to the ACL and assign it to the investigator.
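To see what the @Trim(@Unique(@Replace(DocReadersCandidate;DocReadersRemoved;""))) pattern actually does, here is the same logic mirrored in plain Java (a sketch with hypothetical names, not code you would run inside the Notes client):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class ReaderFieldHelper {

    // Mirrors @Trim(@Unique(@Replace(candidates; removed; ""))):
    // revoked entries are replaced with "", empties are trimmed away,
    // duplicates are removed while the original order is preserved
    static List<String> effectiveReaders(List<String> candidates, List<String> removed) {
        LinkedHashSet<String> result = new LinkedHashSet<>();
        for (String entry : candidates) {
            if (!removed.contains(entry) && !entry.trim().isEmpty()) {
                result.add(entry);
            }
        }
        return new ArrayList<>(result);
    }
}
```

So with candidates ["Roger Rabit/Toons/Warner Inc", "[Audit]"] and removed ["Roger Rabit/Toons/Warner Inc"] only "[Audit]" survives into DocReaders, while the business data in the candidate field stays untouched.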


Generating a dynamic questionnaire

QuickImage Category
A developer recently asked me how to generate dynamic questions. Dynamic as in: there is a general topic and then details. E.g.
Please state the number of building blocks per shape
[Button: add row]
The filled in cells would have entry fields (not just text). So how would one do that in Domino 7 or 8 (so no XPages)? It is actually not that hard. Use a form "ConfigureQuestion" to capture all the parameters. I would have a text-area for the question (call it question) and one for the column headers (call it qdimensions). The nice trick: use a hidden regular field and construct the text-area with passthru HTML. The qdimensions field would be multi-value. Have a dropdown list (call it qType) with the data type. Use Dijit names as aliases, e.g. Text|dijit.form.TextBox. This way you will be able to generate properly validated stuff. Then have a field for width (qwidth), the first column text (qCol1) and question number (qnumber). Then have a hidden field that puts it all together:
tmpCount := @Elements(qdimensions) + 1;
tmpNum := "0":"1":"2":"3":"4":"5":"6":"7":"8":"9";
colCount := @Subset((tmpNum *+ tmpNum); tmpCount);
tmpStart := "<table id=\"qtable" + qnumber + "\"><tr><th colspan=\"" + @Text(tmpCount) + "\">" + question + "</th></tr><tr>";
tmpHeader := "<th>" + qdimensions + "</th>";
tmpLineStart := "</tr><tr><td><input type=\"text\" id=\"row1col1\" /></td>";
tmpQuestions := "<td><input type=\"text\" dojotype=\"" + qType + "\" title=\"" + qdimensions + "\" id=\"row1col" + colCount + "\" /></td>";
tmpLineEnd := "</tr>";
tmpEnd := "<tr><th colspan=\"" + @Text(tmpCount) + "\"><input type=\"button\" value=\"Add Row\" onClick=\"addRow('qtable" + qnumber + "')\" /></th></tr>"

The code is a rough outline, so cut and paste might not work and you also might need to work on the ids for the input fields. But you get the idea. The trick here: the multi-value field qdimensions creates as many th/td/input elements as we actually need. The questionnaire form would have an embedded view showing that hidden field as passthru HTML, so you get all questions rendered. A JavaScript function addRow(id) takes the ID of the table as a parameter, so you know what table to use. Last stop: make the questions submit to an agent, so you can process all the fields.
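The "trick" at work here is that @Formula applies string concatenation to every element of a multi-value field. If that feels like magic, the same expansion spelled out in plain Java looks like this (buildHeaderCells is a hypothetical name for illustration only):

```java
import java.util.List;

public class QuestionTableHelper {

    // In @Formula, "<th>" + qdimensions + "</th>" is applied to every
    // element of the multi-value field; this loop does exactly the same
    static String buildHeaderCells(List<String> qdimensions) {
        StringBuilder b = new StringBuilder();
        for (String dim : qdimensions) {
            b.append("<th>").append(dim).append("</th>");
        }
        return b.toString();
    }
}
```

So a qdimensions field holding "Red" and "Blue" expands into two th cells, which is why the table always has exactly as many columns as the configurator entered.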


Overheard at a developer discussion

QuickImage Category    
XPages feedback from an early adopter:"I'm now able to call our code from an XPage... and equivalent transactions that were taking a full second (or more) to complete in a Domino agent are now completing in 20 - 40 milliseconds. And we haven't even optimized it yet; that 50x performance boost is just from running it in XPages instead of in agents."
What do you want more?


Lotus Notes 8.5 requires the DCOM Server Process Launcher Service

QuickImage Category
I'm using Linux as my base operating system. Until Domino Designer is available on Linux I need to keep a virtual machine with XP around. I work with both VMware and VirtualBox. One thing you automatically do with virtualized operating systems is to shut down what is not essential. In Windows the services loaded by default can be cut back quite a bit. Of course you can overdo that. One lesson I learned the hard way: the Notes 8.5 client crashes fatally if the "DCOM Server Process Launcher" service isn't running. Took me 2 days of wondering until Nathan gave me that tip. Apparently that service is needed on "bare-iron" Windows too.


XAgents - Web Agents XPages style

QuickImage Category  
In Domino, agents are the Swiss Army knife of development. They can be called from the client, triggered on schedule, automatically run on an event, be associated with a web action (Open, Close) or directly called from a URL using ?OpenAgent. The ?OpenAgent use of agents is the subject of this blog entry. An agent triggered from a URL can use Print statements to output results back to the browser. When the first line of your print statements is "Content-Type:..." you also can serve XML or JSON or whatever exotic format your agent logic was able to conceive. Agents come with the usual drawback: the runtime environment is initialized anew for every agent run, so you need to update a document if you want to keep track (replication conflicts anyone?)
XPages don't have a notion of agents (you can call them in the backend, but you won't get anything back from the print statements), so it seems a step back. On closer inspection however one can see that XPages offer greater flexibility for the same pattern of use. Every element on an XPage has a "rendered" attribute which determines if the element should be rendered. "Every" includes the page itself. So when you set the page's rendered property to false, well, the XPages engine renders nothing. So you get an empty canvas you can paint on. XPages still provides you with the page events and access to scoped variables, so you can code better performing "agents" since you can keep lookups in memory. To get the equivalent of the LotusScript Print you need the ResponseWriter from the facesContext. To set the content type you use response.setContentType, the response being obtained from the externalContext. A sample snippet you could put in the afterRenderResponse event of your page would look like this (error handling omitted):
// The external context gives access to the servlet environment
var exCon = facesContext.getExternalContext(); 

// The writer is the closest you get to a PRINT statement
// If you need to output binary data, use the stream instead
var writer = facesContext.getResponseWriter();

// The servlet's response, check the J2EE documentation what you can do
var response = exCon.getResponse();

// In this example we want to deliver xml and make sure it doesn't get cached
response.setHeader("Cache-Control", "no-cache");

// Here all your output will be written
writer.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<christmas date=\"24Dec\" />");

// We tell the writer we are through
// Update: On 8.5.2 or later writer.close() seems not to be necessary / to be rather harmful. Read the revisited entry
More information about the FacesContext can be found in the Domino Developer Wiki.

Bonus Tip: Since you do not render anything from the XPage you just used, why not take that empty page and write the documentation for that XAgent?
Update: Other posts on rendering your own output in XPages: Have fun writing XAgents!


Testing Notes Client applications

QuickImage Category   
My friend Lucius started his own software company, Smart Toucan. His first product is Auto-User, a tool that can test Notes classic client applications. You can use it for recording/replaying demos or to run automated software tests. While IBM has a great product for web application testing (the Rational team will not talk to me anymore if I don't mention them here <g>), automated testing capabilities for the Notes classic client are painfully absent. Auto-User plugs this gap. The current beta release works with Notes 7 and I'm looking forward to the R8 Classic and Basic configuration versions. Go check it out.


Estimating Notes projects

QuickImage Category    
Estimating software project cost and duration seems to have parallels with the local weather report: you simply can't rely on it. Waterfall project approaches are partly to blame. Estimating the amount of work units for a requirement is another. How much work is "The system needs to be user friendly"? For the sake of this post let us presume you have a magic box, typically labeled experience, either explicit or implicit, that gives you an insight into how much code you need. Once you know how much code (a.k.a. LOC or "lines of code") you need to write, estimation becomes manageable. A good tool to estimate is COCOMO II. With the LOC, COCOMO II and your average hourly cost you can estimate the cost easily (Thanks Mitch).
However using a visual development environment (Visual Studio, Eclipse, Adobe Studio, Domino Designer) makes things a little more complicated. A form or a view you draw is not a line of code. The UML model you paint (that will turn into a database schema and code classes via generation) is not a line of code. So you need to expand your mental estimation model slightly. There are two new terms to get familiar with:
  • LOC_TE = "Lines of Code - Time Equivalent": The time for painting one form is equivalent to writing how many lines of code?
  • LOC_AL = "Lines of Code - Avoided Lines": When you paint one form, how many lines of code do you avoid writing? The whole promise of any visual environment is that LOC_AL > LOC_TE. This probably varies from platform to platform and makes an excellent subject for a bar brawl between competing developer camps. I guess you won't find much information about that ratio since nobody nowadays could afford to build a system twice.
Where does that leave you for Notes or Domino estimations? You need to find your magic number for LOC_TE for the various design elements (see graphic below) and then apply the COCOMO II estimation. Of course you could simply use time estimates (by revisiting your past projects); then however you need to calculate your own project variance based on project complexity and team experience. Neglecting experience and complexity are the great fallacies happily indulged in by project planners with too little experience in software project estimations. The difference for a similar sized project can be 500% from "very experienced team working on a low complexity project" to "inexperienced team working on a high complexity project". Even if you just consider experience you will still have to shell out more than double the money (230%), 70% more people and 40% more calendar time for an inexperienced team.
The orange boxes are most likely what you will get out of the discussion with the business users, including some of the yellow boxes. For the red ones you can directly insert lines of code. How do you estimate your projects?


Avoid Keyword Lookups in Read Mode

QuickImage Category
Keyword fields (known to the younger audience as radio buttons, checkboxes, comboboxes, dropdowns etc.) are very often populated by @Formulas. @DBColumn, @DBLookup and @GetProfileField are equally popular. For best performance you should use @GetProfileField (if that is possible in your application context). Another way to speed things up: prevent the formulas from executing in read mode. If for example a formula to populate the options looks like this:
tmp := @DBLookup("Notes":"NoCache";"";"(LookupProjectsBySponsor)";@UserName;2);
@if(@IsError(tmp);"You don't sponsor projects"; tmp)
change it to
tmp := @If(@IsDocBeingEdited; @DBLookup("Notes":"NoCache";"";"(LookupProjectsBySponsor)";@UserName;2); "");
@If(@IsError(tmp);"You don't sponsor projects"; tmp)
so the lookup only runs while the document is being edited.
You can refine that and not populate keywords you don't intend to change (based on a status); you would then extend the first condition.


Advanced DECS usage

QuickImage Category
Relational data models are a very popular abstraction used in IT. And an abstraction they are, rather than a mapping. I haven't come across a relational table in real life, but only documents (the ones signed by a national bank president are my personal favorites). So while document databases, objects and attributes are a better fit to the real world, RDBMS are well understood, come with a powerful query language and are reasonably standardized. Naturally you will come across the requirement to connect Notes and Domino applications to a relational back-end. The options are plenty: ODBC (bad idea), JDBC, LC LSX, DECS, LEI or DB2NSF (not mentioning the 3rd party tools). Typically I see the RDBMS connections entangled in code creating more of this. A better way to separate concerns is to remove RDBMS connections from your code and let the server handle that. In Domino you can use DECS (Domino Enterprise Connection Services) and LEI (Lotus Enterprise Integrator) for that. DECS has come with Domino since some R5 version; LEI is happily sold to you by your local IBM sales rep. I will focus on DECS for this post.
The typical DECS use is to define a data connection (tip: use OLE DB, not ODBC, to connect to MS SQL Server), a data-form mapping and import the primary keys. In the data form mapping (a.k.a. Activity in DECS terminology) you set what events you want to monitor: create, read, update, delete. This typically works like a charm. Any data that is updated on the RDBMS side is automatically pulled into the Notes form when opened. The biggest drawback: DECS doesn't monitor record creation or deletion on the RDBMS side (that is what, besides other capabilities, LEI is made for). So DECS seems to be confined to cases where creation/deletion is limited to a Domino side activity. Also DECS can't trigger stored procedures. With a little creativity however you can push the use of DECS far beyond that. I'll describe some use cases I came across where DECS was used to avoid mixing Domino and RDBMS code in a function or an agent:
  • Employee information is stored in an RDBMS as part of the HRMS. The employee ID is populated into the Domino Directory (yes, it has a field for that). In a Notes application data is needed from the RDBMS. Instead of writing LC LSX code the application simply looks up the empPara document that is linked to the RDBMS using DECS. The document might not exist yet (if the user never used that application before). If it does not exist it is created and populated just with the EmpID. When closed and reopened it will pull the employee information from the RDBMS. This is possible since the DECS task only monitors read/update. A scheduled agent removes documents when there is no more match in the Domino Directory. In summary: if you can anticipate or know the primary key of a record you need, you can use DECS by not monitoring creation/deletion.
  • Based on a workflow, stored procedures in an Oracle database need to be triggered. Here some work on both the RDBMS and the Domino side was done, clearly separating both. An auxiliary table was created in Oracle with an INSERT trigger that would execute the stored procedure using parameters given in that table and write back the success of the operation. Initially it was planned to purge the table regularly (using a Domino agent deleting the documents), but then audit loved the additional documentation, so just archival was established. During the project there were a lot of changes on both ends; however the approach of using a trigger driven table proved to be very efficient in separating the two environments and minimizing interference. E.g. one stored procedure would generate a unique identifier according to some obscure, constantly changing rule. By storing that result into the aux table and creating (after reading the document linked to that table) a corresponding Notes document (again, create was not monitored in that scenario) the application flow was seamless.
  • Enterprise parameter management [Updated] was created in a RDBMS application. Many Domino applications would need to use these parameters. Instead of having LotusScript code in every application doing RDBMS lookups DECS was used to populate and update a parameter NSF. In the activity the option "leave values in documents" was selected, so fast selections (like @DBColumn or NotesView.getColumnValues) would work. Since parameter changes happened rarely a copy of the populate keys agent from the DECSAdm database was created that would periodically shut down the activity for the parameter NSF (only the activity not the whole DECS server), pull the keys from the RDBMS based on the activity definition but only insert new keys (the original agent duplicates keys when you run it twice). Lastly it restarts the activities in DECS.
You of course can rightfully ask: why shouldn't I just use some LC LSX code to connect? It isn't much more trouble. The answer is short: DECS allows separation of concerns. Your Domino developers deal with what they know best: Domino. You retain the interface (and interfaces are the pieces that can make and break an upgrade of any system) configurable outside of your own code. The whole UI for configuring and mapping has been tried and tested for years and you can open a PMR (the IBM lingo for: bug report) against it for IBM to deal with problems. You can't do that with your own code. DECS also deals with format translation and relieves you from the temptation to write inefficient SQL; last but not least DECS takes care of connections and connection pooling.


Supporting Notes Users in Bandwidth Challenged Environment

QuickImage Category
In a recent meeting with a client the question was raised: "How do I support users in bandwidth challenged environments". Bandwidth challenged as in GSM (no GPRS), Modem dialup, Satellite links and the like. My first instinctive answer was replication of course. Notes was around when 9600 Baud was considered fast and replication was working then. But after reflecting on the question for a while I had to answer: It depends. You have a number of options depending on your use case. In general there are two strategies to look into: a) let data transmission happen outside the user time (a.k.a in the background) b) minimize data transmission. These are the options:
  • Replication: This is the clear choice for email and informational databases like document libraries, discussions, team rooms etc. The clear advantage: while you do other things the background replication task makes sure all information reaches your desktop. It also has the clear advantage of data being available off-line. Replication is less suitable for applications where you actually only need a small subset of the whole data (typically in workflow applications) or where data transmission is very expensive. You can tweak replication settings to accommodate that. E.g. in the location "Expensive" you only receive your email (and other) database(s) while in the location Internet all is replicated (The sending of new messages is handled by the router, so it will go out). You also can limit the amount of information replicated. (See your admin help for details)
  • Mail routing: In workflow applications, when requesting action or approval it is usual to just send an email with a link to the workflow document. For low bandwidth situations that could be changed to send a whole form that includes the action buttons. That form could be made part of the mail design (if it is generic enough) or could be sent using "Store form in document". A decision maker would get the entire information in the inbox and can click the button (which would trigger a return mail to the mail-in enabled main application). The mail is stored there as documentation and the main document is updated.
  • Forms bin: This is the "other" end of the mail routing concept. A central database contains all the forms for all the workflow applications used by bandwidth challenged users and the look-up configuration as far as possible. This database gets copied onto the workstation (either when they are in good network conditions or via CD-ROM). Users fill out forms there, but the forms don't get stored in the forms bin; instead they get emailed to a mail-in database that is the main application. You could add a non-replicating "personal bin" to keep local copies. This way only documents that are relevant to the user are transmitted. The forms bin replicates (probably receive only), so updates to the forms, form removals or new forms are properly reflected.
  • Feed enablement: To get an overview of what is happening and what action is required, pull a summary through RSS into your favorite reader. While that is a read-only approach it might fit a lot of needs. Since Domino 7.0.2 there is a feed wizard that can generate feeds without touching your existing application. Of course you can take a peek into IBM's and OpenNTF's templates and have the RSS generated inside your application.
  • Sametime enablement: Add a Sametime BOT to your application, so users can use simple commands to retrieve or act on data there. While it is minimalist it is also frugal on the bandwidth. IBM has toolkits for Java and C++, while our business partners Botstation and Instant Tech provide libraries for LotusScript. Works great on mobile devices too.
  • MQ enablement: This is a variation of the forms bin approach. Using the Expeditor framework in the Notes 8 client you can use MQ to send the data; with a little work (sample on request) it works on R6/R7 clients too. Advantage here: your application doesn't need to worry about on-line/off-line and the data transmitted is very small. Disadvantage: you need to get used to MQ (and obviously install it)
  • Web enablement: Since 4.x it has been possible to render Notes forms in the browser. There is a large body of knowledge out there on how to do that. Of course you want the pages to be very light for challenged bandwidth, add compression or use XPages which does a lot of optimization for you (you want to use Firefox for its better handling of JavaScript caching)


Estimating efforts for web enablement

QuickImage Category
A lot of organizations I talk to have a lot of Notes Client applications that they want to make accessible through browsers. Some want browser only, some want dual access. All wonder how to estimate the effort needed properly. As a rule of thumb one can say: number of artifacts times time per artifact times experience of the development team. This would make one equation with three unknowns, which can't be solved. I won't discuss the "time per artifact" in this post, since this is very dependent on your technology, process and tooling used. But I will shed some light on the other two variables.
In my experience the skill factor follows a power of two: for a guru, champion or master developer your factor is 1 (or even less), for a well experienced developer 2, for an experienced developer 4, for an entry level developer 8, for a novice 16. Of course even a novice can contribute if (s)he applies and polishes what the more experienced developers produce.
To determine the number of artifacts you would look at: forms, views, fields, columns, code events, lines of code. You can easily extract this information from a Notes database by exporting your design as DXL and then counting the respective tags. Since that's a little boring, let your computer do the counting. A few lines of Java will do the trick. Don't want to write that? Well, then just download this. It is sample code written in Java 6 (no Notes classes used) that will show all tag types as well as lines of code for LotusScript and @Formula. You can run it from the command line: java InspectDesign yourdatabasedesign.dxl or use it in your code.
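The core of such a tag counter fits in a few lines. The sketch below is a simplified stand-in for the downloadable sample (same idea, but using the JDK's built-in StAX parser and Java 8 idioms rather than the Java 6 of the original; class and method names are illustrative):

```java
import java.io.StringReader;
import java.util.Map;
import java.util.TreeMap;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class InspectDesign {

    // Counts how often each element name appears in a DXL export
    static Map<String, Integer> countTags(String dxl) {
        Map<String, Integer> counts = new TreeMap<>();
        try {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(dxl));
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    counts.merge(reader.getLocalName(), 1, Integer::sum);
                }
            }
        } catch (javax.xml.stream.XMLStreamException e) {
            throw new RuntimeException("Not well-formed DXL: " + e.getMessage(), e);
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        // Read the DXL file given on the command line and print the tag tally
        String dxl = new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get(args[0])));
        countTags(dxl).forEach((tag, n) -> System.out.println(tag + ": " + n));
    }
}
```

Run against a design export, the tally of form, view, field and column tags gives you the artifact counts to feed into your estimation model.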


Domino Server and Domino System Template Versions

QuickImage Category
Every Domino server version comes with its set of system templates provided by IBM, most notably pubnames.ntf and admin4.ntf (the full list of templates is in your Administrator help file). IBM's recommendation for server upgrades is to upgrade the administration server of the Domino Directory first, including the templates. I recently encountered a number of customers who have concerns about upgrading the address book design when they also have older server versions in their network. While it is possible to prevent replication of the design, it is more trouble than it is worth. So let me clarify a few pointers about Domino servers and Domino system templates:
  • The Domino Directory template and the other system templates are designed to be fully backward compatible. Checking the technotes you will find that we recommend to have the latest maintenance release in place before upgrading to a new main version.
  • Domino Servers are designed to be forward compatible. A Domino server will read only configuration values from the Domino directory it understands. New parameters are happily ignored.
  • A Domino server must run with the matching version of the system templates. That it does run with older template versions is lucky for you, but definitely not a supported configuration. And none of the new capabilities can be used, since you can't configure them.
  • Maintenance releases are published for a reason (Go and checkout the Fix List). IBM will not backport or provide a hotfix for problems that have been addressed by a maintenance release.
So what can you do if you are not sure which versions of the templates are scattered across your domain?
The Admin help provides the list of system templates. You don't want to mess with differing touch dates, so a bit of radical surgery is in order:
  1. Check whether you did any customization to your version of the templates. If so, separate your customizations as described before.
  2. Use the admin client or a little bit of script to remove all the system templates.
  3. Install a new server somewhere (your thumb drive is a good location) to "harvest" the original templates. You don't need to generate an ID or configure it. Just install it and copy the NTF files from the data directory.
  4. Use the admin client to add your server and admin groups to the ACL, both in their native format and in square brackets (using square brackets adds these groups to all databases newly created with those templates).
  5. Copy these templates to your administrative server (pros use replication for that).
  6. Use the admin client to drag & drop the templates to the other servers. This will replicate them over (your adminp needs to work properly, but it does, doesn't it?).
You might want to do that off-hours so as not to get in the way of regular replication activities.


Running xPages in a R7/R8.0 environment

QuickImage Category
"R8.5 and xPages ante portas!" Before long R8.5 will be released and you can deploy the shiny new Domino web 2.0 applications that created so much buzz. However it is a safe bet that your infrastructure team will stop you short: "Not so fast young man, we are not ready". Luckily you don't need to wait for a wholesale upgrade of the entire Domino infrastructure. You just need to add one R8.5 server:
There are a few pointers you need to be aware of:
  • The R8.5 server must use the R8.5 templates for all system databases, most notably the Domino directory. R6.5x and R7.0x servers work with the 8.5 system templates. If upgrading the system templates is a problem (a real or perceived one) for the infrastructure team, configure replication not to update the design from/to the R8.5 server (you do that by changing the ACLs).
  • You need Domino Designer 8.5 for xPage applications. You can use Domino Designer 8.5 to design applications for R6.5/R7.0/R8.0. You just need to test them with the respective client (VMWare and friends are your friend. Or if you got a collection of PCs, use Synergy)
  • You must use the Domino Administrator 8.5 to administrate a R8.5 server. You can easily administrate R6.5/R7.0/R8.0 servers using Domino Administrator 8.5
  • xPages design elements don't need to be stored in the same NSF as the data. This means that you can design your web application without touching the original NSF design.
  • The data sources for xPages can be on a different server. This means you don't even need to replicate your existing databases to your new R8.5 server. However you do want a fast network connection between the two servers. For the R6.5x or R7.x server the xPages application is just another set of users coming in. Of course best performance means local access using ODS48.
  • Never ever use a lower ODS than you can. ODS stands for On Disk Structure. Domino servers support older versions of the ODS. You can use an R5 or R6/7 ODS on an R8 server. But it is not a good idea. The support is meant for the upgrade phase. You upgrade your server, the databases then have the old ODS. You run compact and you are good to go (or my favorite: create a new replica). The ODS has no influence on replication. The only catch: you can't FTP a database back onto a lower version of the server. This should never be an issue since skilled Domino developers don't FTP, they replicate.


Access protected Notes documents - RDBMS style

QuickImage Category
An interesting discussion happened today around Notes performance. In a rather large database (> 500k records) all documents are protected with Author and Reader fields. The access is rather narrow, so any user might see just about 1000 of the 500000 documents. Opening a view in the Notes client is rather slow and the old "rah, rah, Notes is bad" song is performed. Notes performance is discussed at great length in other places, so that isn't what this post is about. I was wondering how one would implement access control at the record level in a relational database. These would be the specifications:
  • Design a view that restricts access to a subset of table data (we simplify here by excluding multi-value data fields)
  • A record can have zero or many readers, who are allowed to see the record
  • A record can have zero or many authors, who are allowed to see and (eventually) update the record
  • If a record has no readers any user with access to the database can see the record
  • If one or more readers are present only the sum of readers and authors can see the document
  • A reader or author can be of type: Person, Group, Role
  • A role can be assigned to one or more Persons or Groups
  • A group can contain Groups and People (we simplify here and omit the * operator)
  • A group can have zero or more roles
  • A person can have zero or more roles
  • A person can be member in zero or more groups
Graphically it would look somehow like this:
Now what would an SQL statement look like? I'm not an SQL expert, so I might get quite some stuff wrong (I'm assuming maintable.id as the primary key). But here is my go:

SELECT * FROM maintable
   WHERE maintable.id NOT IN (SELECT readertable.maintableid FROM readertable)
   OR maintable.id IN (SELECT readertable.maintableid FROM readertable
      WHERE readertable.entry = @CurrentUser
      OR readertable.entry IN (SELECT roletable.roleid FROM roletable WHERE roletable.entry = @CurrentUser)
      OR readertable.entry IN (SELECT grouptable.groupid FROM grouptable WHERE grouptable.entry = @CurrentUser)
      OR readertable.entry IN (SELECT roletable.roleid FROM roletable WHERE roletable.entry IN
         (SELECT grouptable.groupid FROM grouptable WHERE grouptable.entry = @CurrentUser)))
   OR maintable.id IN (SELECT authortable.maintableid FROM authortable
      WHERE authortable.entry = @CurrentUser
      OR authortable.entry IN (SELECT roletable.roleid FROM roletable WHERE roletable.entry = @CurrentUser)
      OR authortable.entry IN (SELECT grouptable.groupid FROM grouptable WHERE grouptable.entry = @CurrentUser)
      OR authortable.entry IN (SELECT roletable.roleid FROM roletable WHERE roletable.entry IN
         (SELECT grouptable.groupid FROM grouptable WHERE grouptable.entry = @CurrentUser)))
And that's without taking into account that a group could contain a group. Looks like a performance pig to me. Luckily in Domino we can use categorized views to make access fast. Of course I'm happy to learn that there are smarter SQL queries around.


The multi-faced nature of the NSF

QuickImage Category  
I'm running the XPages enablement workshop in Beijing this week. Having to teach others about new functionality guarantees either despair or insight. Today I had a rather insightful moment. Very often the NSF gets blasted for being a "flat-file-non-relational-whatever" file. When you have a closer look at the format however, you will realize that it is quite an amazing piece of technology:
  • Built to function as a "whatever-you-want-to-dump-into" data store
  • Built backward compatible. I can still open R2 databases in my R8.5 client
  • Built to run on a client and on a server with high concurrent access
  • Built to be robust: even if you torture it (disk space, sectors etc.) it will mostly recover
  • Built to provide full-text search access
  • Built to be what you want it to be: if you happen to be a Notes client or a Domino server, it is a repository for design elements and a document database. If you happen to be the Eclipse IDE (or a webDAV client) it is a file system. If you happen to be the XPages server task it is a WAR application store. And if you happen to be David Allen, it is your trusted system for Getting-Things-Done


Fields in Forms Matrix

QuickImage Category
When cleaning up existing applications it is good to have an overview of which fields are used across which forms. Unfortunately the synopsis isn't very helpful there. However with a few lines of LotusScript one can create a matrix that serves as an overview. It is a comparison by name only and doesn't tell you anything about computation mode or data type. But it is a start. YMMV
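The original uses LotusScript against the live design; as an illustration, here is a sketch of the same matrix idea working on a DXL export instead. Class and names are mine, and the sample ignores the real DXL namespace for brevity:

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeSet;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Sketch: build a field-by-form matrix from a DXL design export. */
public class FieldMatrix {

    // Map of field name -> set of form names containing that field
    public static Map<String, TreeSet<String>> buildMatrix(String dxl) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(dxl)));
            Map<String, TreeSet<String>> matrix = new LinkedHashMap<>();
            NodeList forms = doc.getElementsByTagName("form");
            for (int i = 0; i < forms.getLength(); i++) {
                Element form = (Element) forms.item(i);
                String formName = form.getAttribute("name");
                NodeList fields = form.getElementsByTagName("field");
                for (int j = 0; j < fields.getLength(); j++) {
                    String fieldName = ((Element) fields.item(j)).getAttribute("name");
                    matrix.computeIfAbsent(fieldName, k -> new TreeSet<>()).add(formName);
                }
            }
            return matrix;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String dxl = "<database>"
                + "<form name='Memo'><field name='Subject'/><field name='Body'/></form>"
                + "<form name='Reply'><field name='Subject'/></form>"
                + "</database>";
        buildMatrix(dxl).forEach((field, forms) ->
                System.out.println(field + " -> " + forms));
    }
}
```

Each output row is one line of the matrix: the field name and the forms it appears in.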


xPages Overview

QuickImage Category
xPages Overview
Letting the xPages workshop sink in for a few days, I'm now preparing to spread the love. If you happen to be in Beijing on October 15-17, 2008 you could join me for the first AP xPages workshop. It will be designed as a T3 (which stands for Train-The-Trainer), so you will not only learn what xPages is all about, but also how to spread the love. I think xPages will not only be a major overhaul of how existing Notes shops develop web applications, but it also offers a unique opportunity for new developers to get started on an attractive platform. What makes xPages so special?
  • You only need to learn one scripting language. Both client and server run on JavaScript
  • You can move code from client to server to client easily. It is after all the same language
  • You have the comfort of a fast turn-around (like all scripting based web pages) combined with access to all things Java
  • The database is baked right in. So no tedious xDBC configuration or scaffolding or ORM mapping. Submitted forms become documents
  • Domino's security is baked right in. You just declare the fields that control access and you are done (Author and Reader fields for the non-Notes readers here)
  • Ajax everywhere: You can just specify that any action should refresh only a given set of elements and xPages will take care of the Ajax calls
Once you get into xPages the list will get longer. So stay tuned.


Generate xPages from your views

QuickImage Category
The Dublin workshop has concluded. Others have reported, and I have started playing with automating the upgrade of your existing applications to include xPages. In a nutshell: xPages allows scripting your application end-to-end in JavaScript. JavaScript on the client and the server, and just a selection for you to decide what runs where. Of course JavaScript is not a Domino server language in older Notes versions, so you need to do something about your @Formula and LotusScript. But that's a topic for another time. xPages are stored as XML. Existing Domino design elements can be exported with reasonable accuracy as DXL, which is XML too. So with a little XSLT the two might be mapped onto each other.

A Domino view starts with <view name=', while an xPage starts with <xp:viewPanel. Different names but a similar structure! A column in a Notes view is represented by <column itemname=, the column in the xPage by <xp:viewColumn columnName=. Again, just different names. To get from all your views to xPages follow these steps:
  • Create one xPage that looks the way your stuff should look like
  • Switch to the source view, cut and paste it into your favorite XSLT editor
  • Replace the variable parts (the columns) with XSLT template logic. Save the file into [NotesData]/xsl
  • Use the DXLTools - Transformer to apply that one by one to your views (you could script that)
  • Switch to the Java perspective and import the resulting files back in your NSF
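The mapping sketched in these steps can be mocked up with the JDK's built-in XSLT engine. The stylesheet below is a deliberately tiny stand-in, not a production transform: it ignores the real DXL namespace and all the attributes a complete xPage needs, and the class name is mine:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/** Sketch: transform a (namespace-free) view DXL fragment into an xPage fragment. */
public class ViewToXPage {

    // Tiny stylesheet: view -> xp:viewPanel, column -> xp:viewColumn
    private static final String XSLT =
          "<xsl:stylesheet version='1.0'"
        + " xmlns:xsl='http://www.w3.org/1999/XSL/Transform'"
        + " xmlns:xp='http://www.ibm.com/xsp/core'>"
        + "<xsl:template match='view'>"
        + "<xp:viewPanel var='rowData'>"
        + "<xsl:apply-templates select='column'/>"
        + "</xp:viewPanel>"
        + "</xsl:template>"
        + "<xsl:template match='column'>"
        + "<xp:viewColumn columnName='{@itemname}'/>"
        + "</xsl:template>"
        + "</xsl:stylesheet>";

    public static String transform(String viewDxl) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(viewDxl)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(transform(
            "<view name='All'><column itemname='subject'/><column itemname='date'/></view>"));
    }
}
```

The real exercise is the same idea with the full DXL namespace and the view's sort and formatting attributes carried over.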


How much bandwidth does Sametime need?

QuickImage Category
A question that pops up quite regularly is how much bandwidth does Sametime need. As a rule of thumb you can use this values:
  • A one-to-one text chat needs about 0.2kBit/sec - yes, that would be 200 baud. What worked in the 300 baud modem days with your favorite BBS hasn't changed: chat is cheap. Presence transfer consumes about the same (your NetBIOS broadcasts probably squander 10 times that bandwidth). If you transfer files or images (pasted into the chat) bandwidth consumption will peak: get what you can get and finish the transfer
  • Voice chat requires around 10kBit/sec per connection. A two-way chat has one connection, a three-way chat two (n-1 actually, since you don't need network bandwidth to talk to yourself)
  • Video chat consumes 256kBit/sec per connection. The same rule as for voice chat applies. A three-way video chat thus requires 512kBit/sec.
The Sametime codecs have adaptive features and you can tweak the video quality, so YMMV.
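The rules of thumb reduce to a one-liner: bandwidth = rate per connection × (participants − 1). A sketch, with the constants taken from the numbers above (class and method names are mine):

```java
/** Sketch of the rule-of-thumb math: n participants need n-1 connections. */
public class SametimeBandwidth {

    // kBit/sec per connection, from the rules of thumb above
    static final double TEXT = 0.2;
    static final double VOICE = 10.0;
    static final double VIDEO = 256.0;

    // n participants share n-1 connections (you don't talk to yourself)
    public static double required(double ratePerConnection, int participants) {
        return ratePerConnection * Math.max(0, participants - 1);
    }

    public static void main(String[] args) {
        System.out.println("2-way voice: " + required(VOICE, 2) + " kBit/sec");
        System.out.println("3-way video: " + required(VIDEO, 3) + " kBit/sec");
    }
}
```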
Thanks David


Where to install Lotus Notes - and what to do when upgrading?

QuickImage Category
When it comes to installing Lotus Notes there are a lot of conflicting ideas floating around. I'd like to shed a little light on them. Existing Notes shops typically have their Notes clients installed in C:\Notes and the data in C:\Notes\Data. This selection predates the Win95/NT/XP days. While that works perfectly fine, it makes Notes the odd child on the block, since all other software gets installed in C:\Program Files.
When you install Notes on a clean machine it will suggest C:\Program Files\IBM\Lotus\Notes for the program files and C:\Program Files\IBM\Lotus\Notes\Data for the data. While the first suggestion is perfectly fine, I would not recommend the second. Data has no place in a program directory. Interestingly there are other options available. When you install Notes using the multi-user install, Notes suggests C:\Documents and Settings\<username>\AppData\Local\Lotus\Notes\Data (on Vista C:\Documents and Settings would be C:\Users). Of course the directory names are not hard coded but pulled from the environment variables ProgramFiles and USERPROFILE. And I think that is where the data should be in any case.
When you look at the installation on Linux, there the program goes into /opt/ibm/lotus/notes and the data in /home/<username>/lotus/notes, with shared data (the templates/help) sitting in /var/...
When you upgrade your Notes client, be it manually or using Smart-Upgrade, you will realize that the installer suggests the existing directory, thus perpetuating location decisions that might no longer be appropriate. I highly suggest taking an upgrade exercise as an opportunity to rectify such decisions. How would you go about it? The best approach is to take a step back and see an upgrade exercise from a holistic point of view. You need to:
  • Gather evidence that installation will work (hardware requirement, available disk space)
  • Enlighten the users what's coming up/is new
  • Do the actual update
  • Verify the results
    One successful approach is to create a database with information around the upgrade: users and machines, training schedules etc. Send users an invitation email letting them know that the upgrade is planned and that they are required to attend a briefing on the new features (this could be a briefing using Sametime or Sametime Unyte). Make them enroll in one of the briefings by filling in a form in that database. Offer enough day/time slots to get them all. The enrollment form contains code that does a preliminary workstation check: list of databases, install locations, free space etc., and records the results in that database. This allows you to generate the scripts needed to clean things up. On the day of the briefing send another email to the user: "your session is on in xx minutes, click here to confirm your attendance". The email function would obviously be a button in your database. That confirmation button would then trigger the update proceedings. If you need to move the data or program files before the upgrade, you need to run a little custom script that does that for you. It also needs to iron out the Registry. I personally have had good experiences with NSIS (given some time I'll post some samples).
    Then, while the user is attending the briefing, the upgrade happens and you get the best learning benefit: users can use the new functions immediately after their training. The briefing needs to be long enough to give the installer time to run. The last item the database contains is a rating and satisfaction feedback form for the whole process.

    Some common problems when picking install locations for Lotus Notes:
  • Installing data into the user profile and having XP sync that profile on logon/logoff. This means that you shuffle your entire Notes data on every logon/logoff, since all file dates will be updated when Notes runs. Exclude the directory from sync.
  • Installing data into the home drive on a file server. This is a big performance hit. If you do that, at least point desktop.dsk and cache.dsk to a local drive. The rationale often is backup or the ability to work on multiple workstations. If that is a requirement, you had better have a look at the multi-user install and the roaming user feature. This yields much better results. Storage requirements are no different, the data just moves from your file server to the Notes servers. Additionally you then have access to all address books and bookmarks in the context of your Domino server and can easily run analysis and update code.
  • Low hard drive maintenance. Notes databases are big compared to your average file on a workstation (having archives or replicated shared databases in the multi-gigabyte range is quite common). NTFS does a poor job (as does ext3 for that matter) of dealing with big files. They get fragmented very fast. So having regular defragmentation run, especially before you upgrade, makes a lot of sense.


Connecting Sametime to MSN

QuickImage Category
Disclaimer: The following considerations are hypothetical and are not something IBM recommends, supports or implements. These are purely my little musings about how something could eventually be done if someone really wanted to. It is a technical musing only, and anybody acting on it had better study MSN's usage agreement and have a word with their lawyers.

So let us begin....
The Sametime gateway that connects a Sametime server to AIM, Yahoo or Google currently does not connect to MSN. This is not a technology shortcoming but the lack of a federation agreement between IBM and Microsoft. This post is no place to speculate whose fault that is. At the core of the problem is federation. When you use Sametime you have one identity that makes you visible in all networks. Using a client-side solution like Pidgin, Digsby, Adium, Trillian or others requires an account with each chat provider. It also requires your firewall to be open for all these protocols for all workstations. Furthermore you won't be able to have a central logging facility, which is a compliance and audit requirement in a lot of industries.
The solution approach: if Sametime traffic could run through something that handles the transition between federation and individual accounts, it might do the trick (important: this is a technical, not a legal statement!). Enter XMPP. The Sametime 7.5.1 gateway could connect to GTalk, which is XMPP based. The Sametime 8.0 gateway can connect to any XMPP based server (however there is no ibm-supports-the-following-xmpp-servers list). If that XMPP server connects to MSN using MSN IDs, a Sametime user can talk to an MSN user. For the MSN user it will look like a regular MSN account (I once registered my Passport account using a regular email address, so any address can serve as an MSN ID). It would work somehow like this:
I spoke to Artur Hefczyc from the Tigase project. He confirmed that Tigase has a working MSN transport and that he would work with anybody to get that working. Since Tigase is an open source project, he needs to be paid for that work.

Update: Fixed the X vs. S typo.


The Lotus WIKIs are here

QuickImage Category
Steve Castledine's blog template, which has been IBM's official Domino blog template since Domino 7.0.2, got a recent boost in functionality: it now also serves as a WIKI. The Lotus team announced availability about a month ago. The following WIKIs are available: A nice side effect of choosing Domino: there is a decent Notes client UI for the WIKI and I can take it offline to places where the Internet doesn't reach (yet). There is no WIKI for Notes and Domino. Given the rich material available all over the net, that wouldn't make sense (would it?). A link collection maybe; for an aggregation check PlanetLotus.


Securing your CxO's mail file

QuickImage Category
In a recent customer session the question popped up: how can you really secure your inbox against unauthorized access, including access by nosy admins? The admin part can be particularly hard, since by definition administrators should have the ability to administrate, which can mean accessing your files. In Domino you actually can lock down the environment, even if you have to go through quite a number of steps. It is a typical case of "who controls the controller". This is my wish-list:


So you want to be a Domino developer?

QuickImage Category
The good news: most of the skills that will help you to excel in Domino are generic and can be applied to any development environment.
The bad news: there is a lot of stuff to learn. I'm compiling a roadmap for (Domino) developer wannabes, taking a slightly broader approach. This is my first draft:
Development skills required for development in general and Domino in particular
In the coming days and weeks I will discuss/fill each of these circles with details and recommended readings/training material. Feedback is highly appreciated.


Notes and Domino's most wanted

QuickImage Category
It is all out there. Sometimes it is hard to find. Support has compiled various lists with "most wanted documents": Enjoy!

Bonus Track: The official history of Lotus Notes and the wiki version


Creating SQL Statements from form definitions

QuickImage Category  
I had an interesting discussion with a customer this week. They use Domino and dotNet for their web applications. Their decision criterion for when to use which: if the data of the application needs to be fed into their data warehouse at a later point in time, they use dotNet, since storage there typically ends up in an RDBMS. The biggest problem they face, in their own words: "Our users are pretty spoiled by Domino. They expect days as turnaround time for applications. Using dotNet it takes at least three times longer."
So I asked why they don't use DECS to connect to the RDBMS. They could develop the application in Notes/Domino and, once the app does what the users want, just add the tables in the RDBMS and link them up using DECS. They asked back if there is a way to generate the table, or at least the CREATE TABLE statement, from Domino directly. The short answer: yes, you can, however you need to make decisions on data types and field lengths. The long answer: you need Domino Designer (for the Tools - DXL Utilities - Transformer ... menu) and a little XSLT stylesheet.
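The actual approach uses an XSLT stylesheet via the Transformer menu; as an illustration of the same idea, here is a Java/DOM sketch that walks the fields of a form's DXL and emits a CREATE TABLE statement. The type mapping and the default column length are my own arbitrary choices - exactly the decisions mentioned above - and the sample ignores the real DXL namespace:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Sketch: derive a CREATE TABLE statement from a form's DXL. */
public class FormToSql {

    // Very naive DXL-to-SQL type mapping - in real life a human decides
    static String sqlType(String dxlType) {
        if ("number".equals(dxlType)) return "NUMERIC";
        if ("datetime".equals(dxlType)) return "TIMESTAMP";
        return "VARCHAR(255)"; // arbitrary default length
    }

    public static String createTable(String formDxl) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new InputSource(new StringReader(formDxl)));
            Element form = doc.getDocumentElement();
            StringBuilder sql = new StringBuilder("CREATE TABLE ")
                    .append(form.getAttribute("name")).append(" (");
            NodeList fields = form.getElementsByTagName("field");
            for (int i = 0; i < fields.getLength(); i++) {
                Element field = (Element) fields.item(i);
                if (i > 0) sql.append(", ");
                sql.append(field.getAttribute("name")).append(" ")
                   .append(sqlType(field.getAttribute("type")));
            }
            return sql.append(");").toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(createTable(
            "<form name='Memo'><field name='Subject' type='text'/>"
          + "<field name='Created' type='datetime'/></form>"));
    }
}
```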


View Selection Formulas

QuickImage Category
Application performance in Notes and Domino can vary greatly depending on the number of views and the view selection formulas you use. When inheriting database applications for maintenance there is no really easy way to get an overview of which view selection formulas are used. So I wrote myself a function that creates a document with such an overview table. See the function below. To test it I simply copy it into an agent and call it for the current database.
Of course you could think of running it against multiple databases or altering the html with some Ajax stuff to make it sortable. Here is my test agent:
Option Public
Option Declare

Sub Initialize
    Dim s As New NotesSession
    Dim db As NotesDatabase
    Dim doc As NotesDocument
    Set db = s.CurrentDatabase
    Set doc = db.CreateDocument
    Call ReportViewSelectionFormulas(db, doc)
    Call doc.Send(False, s.UserName)
End Sub
This LotusScript was converted to HTML using the ls2html routine,
provided by Julian Robichaux at
Update: Sorry folks, got the wrong code in the core function, fixed now. Enjoy!


Domino Design Code Injection - Part 2

QuickImage Category
In Part 1 I introduced the approach to code injection and some sample LotusScript Code to pull a design element, transform it and push it back. Missing there was the function that actually implemented that method. I'm using a Java class to do that, which I wrap into LS2J. The LotusScript function looks like this:
Option Public
Option Declare
Uselsx "*javacon"
Use "CodeInjector"

Function injectCode(rawDXL As String, xPath As String, codeSnippet As String) As String
    Dim jSession As JAVASESSION
    Dim injectorClass As JAVACLASS
    Dim injector As JavaObject
    Set jSession = New JAVASESSION
    'Fallback: return the input unchanged if anything below fails
    injectCode = rawDXL
    Set injectorClass = jSession.GetClass("DominoXPathInjector")
    Set injector = injectorClass.CreateObject
    'Now set the document
    Call injector.setDocument(rawDXL)
    'and execute the function
    injectCode = injector.injectCode(codeSnippet, xPath, 4)
End Function
This LotusScript was converted to HTML using the ls2html routine,
provided by Julian Robichaux at
Now let us have a quick look at the Java class.
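The original Java class isn't reproduced here, so the following is a hedged reconstruction of what a class with that interface could look like, using only the JDK (DOM, XPath and the identity Transformer). The method names match the LotusScript calls above; the mode constants (and the meaning of mode 4 as "append as child") are my own guess:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

/** Sketch of an XPath-driven DXL injector - not the original class. */
public class DominoXPathInjector {
    public static final int BEFORE = 1, AFTER = 2, REPLACE = 3, APPEND = 4;
    private Document doc;

    public void setDocument(String xml) {
        try {
            doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public String injectCode(String snippet, String xpath, int mode) {
        try {
            Node target = (Node) XPathFactory.newInstance().newXPath()
                    .evaluate(xpath, doc, XPathConstants.NODE);
            if (target == null) return serialize(doc); // nothing matched, no change
            Document snipDoc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new InputSource(new StringReader(snippet)));
            Node newNode = doc.importNode(snipDoc.getDocumentElement(), true);
            Node parent = target.getParentNode();
            switch (mode) {
                case BEFORE:  parent.insertBefore(newNode, target); break;
                case AFTER:   parent.insertBefore(newNode, target.getNextSibling()); break;
                case REPLACE: parent.replaceChild(newNode, target); break;
                default:      target.appendChild(newNode); // APPEND as last child
            }
            return serialize(doc);
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    private String serialize(Document d) throws TransformerException {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(d), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) {
        DominoXPathInjector inj = new DominoXPathInjector();
        inj.setDocument("<form name='Memo'><actionbar/></form>");
        System.out.println(inj.injectCode("<action title='New'/>", "/form/actionbar", APPEND));
    }
}
```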


Domino Design Code Injection

QuickImage Category
In an earlier entry I discussed how to maintain custom system databases. It works well when you add design elements or change them wholesale. However it doesn't work very well when you tweak design elements. Something like: adding one action in a form or using a different subform for a specific task. Once an updated system template is available, you have to go back and recreate all your changes in your template.
This entry discusses how to automate these little fixes. There are a number of moving parts involved:
  1. Export the design element using DXL. (You might need to set NotesDXLExporter.RichTextOption = RICHTEXTOPTION_RAW to make sure the fidelity of forms is maintained)
  2. Locate the entry/replace point. The best mechanism for that is XPath
  3. Locate the new code. This could be as easy as providing a string with the XML snippet, or pointing to an existing design element elsewhere. (We would use XPath again)
  4. Perform the insertion/replacement. We will have the option: before/after/replace
  5. Write back your changes and test it
What type of code injections come to my mind:
  • Add an action to a form/view
  • Add a subform to a form
  • Replace a validation formula in a field
  • Insert a script library and a custom class into a form
  • Add a field at the top or bottom of a form (our favorite hidden fields - $$Return anyone?)
Understanding DXL and XPath is key to making this work. As a Domino developer (and you are one? Otherwise all this might sound Greek and Latin to you) you have seen DXL and its description in the help file. XPath might be less familiar. In a nutshell: XPath is to XML what SQL is to relational data. When you develop XML stylesheets (a.k.a. XSLT) you use XPath selectors to define your template matches. XPath is a set language, so you had better remember your kindergarten days (give me all shapes that are blue, triangular and big). It is easy to get an overview and to get started, and it can get as complex as you can take it. One little caveat (you see, Latin is creeping in): XPath is case sensitive.
Some simple examples:
  • * selects ANY element.
  • / selects the root element, basically before any tags in the XML
  • /database/form/actionbar selects the actionbar elements in all forms in our XML
  • /database/form[1]/actionbar selects the actionbar element in the first form in our XML
  • /database/form[@name='Message']/actionbar selects the actionbar element in the form Message in our XML
  • /database/form[@name='Message'] //field[@name='Subject'] /code[@event='defaultvalue'] /formula selects the default value formula for the field "Subject". Please note the double slashes in front of the field. They are intentional: one slash says "a child", two slashes say "any descendant down the document". Also: no spaces before/after the slashes. They are here for readability only.
  • //field[code/@event='inputvalidation'] Any field that has an input validation formula
You get the idea. You want to have a good XML editor.
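The selectors above can be tried out directly with the JDK's built-in XPath engine. A small sketch (class and names are mine; the sample XML mimics the DXL shapes used in the examples, without the real DXL namespace):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Sketch: evaluate XPath selectors against a DXL-shaped document. */
public class XPathDemo {

    // Return how many nodes an XPath expression selects in the given XML
    public static int count(String xml, String expr) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
            XPath xp = XPathFactory.newInstance().newXPath();
            NodeList hits = (NodeList) xp.evaluate(expr, doc, XPathConstants.NODESET);
            return hits.getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String dxl = "<database>"
            + "<form name='Memo'><actionbar/><field name='Subject'>"
            + "<code event='inputvalidation'><formula>@Success</formula></code>"
            + "</field></form>"
            + "<form name='Reply'><actionbar/></form></database>";
        // actionbars in all forms
        System.out.println(count(dxl, "/database/form/actionbar"));
        // fields anywhere below the form named Memo
        System.out.println(count(dxl, "/database/form[@name='Memo']//field"));
        // any field with an input validation formula
        System.out.println(count(dxl, "//field[code/@event='inputvalidation']"));
    }
}
```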


Reply-To-All Redux

QuickImage Category  
From the "Your-Notes-client-is-different-department".

When replying to an email Notes does some address shuffling for you. As long as you have one sender and one recipient, it is quite logical. The original sender becomes the recipient, the recipient becomes the sender. Everybody in the cc list stays in the cc list.
Incoming eMail to Send eMail

However that doesn't work anymore when you had more than one recipient in the To field. The general rule (as I once understood it) for when to place someone in To and when in CC is:
- To: this is an actionable item for the recipient
- CC: for information only. (These days it is more like a "Just in Case Copy")

So what should your email system do, when there is more than one person in the TO list?

a) Presume that your reply to the original sender is actionable for the other recipients too?
b) Presume that your reply to the original sender is for information only for the other recipients?

Notes uses presumption b) while other email systems use presumption a). So you go from:
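The two presumptions can be sketched as a bit of Java (names are mine; "me" is the replying user, who drops out of the recipient list):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the two reply-to-all presumptions discussed above. */
public class ReplyAll {

    // Presumption a): the reply stays actionable for the other original
    // To-recipients as well, so they go back into the To list
    public static List<String> toListPresumptionA(String sender, List<String> to, String me) {
        List<String> result = new ArrayList<>();
        result.add(sender);
        for (String r : to) {
            if (!r.equals(me)) {
                result.add(r);
            }
        }
        return result;
    }

    // Presumption b) (Notes): only the original sender gets an actionable
    // reply; the other original recipients move to CC
    public static List<String> toListPresumptionB(String sender) {
        List<String> result = new ArrayList<>();
        result.add(sender);
        return result;
    }

    public static void main(String[] args) {
        List<String> to = new ArrayList<>();
        to.add("me@example.com");
        to.add("bob@example.com");
        System.out.println("a): " + toListPresumptionA("alice@example.com", to, "me@example.com"));
        System.out.println("b): " + toListPresumptionB("alice@example.com"));
    }
}
```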


Vote for DXL!

QuickImage Category
Rocky posted a great idea on IdeaJam: Make DXL complete and roundtrip save. I'm a big fan of DXL and really really want that feature. So go over to IdeaJam and cast your vote!


The making of a download script

QuickImage Category  
Based on the feedback on my little download script I'd like to share how I created it.
Step 1: Have a look at the database that hosts the downloads:
Step 2: Pick a view
Step 3: The same view as XML and all of it.


JavaViews in your own applications

QuickImage Category
The R8 inbox, calendar and contacts take full advantage of the new Java views. However support for your own applications has been postponed to the NMFR*. Nota bene: "support for". That doesn't mean that you can't do it. After all, the mailbox is just a Notes application. The biggest challenge: in the normal Designer there seems to be no way to specify the additional properties for the Java views like side preview or the business card view. Actually there is. In the R8 reviewers guide on page 155 (that would be PDF page 163) a new notes.ini parameter is documented:


Add this parameter to your local notes.ini and Designer will reveal new properties in views (the (i) tab in the view and the propeller head tab for a view column). The redpaper also mentions what you can do with this. One interesting piece: currently we have one viewer type "Table" and one "Tiled". You can specify both and you will get a switcher (like in contacts). The API will be documented, so you could render your very own views (Pivot anybody?). However if you add all these properties nothing happens. A few pieces are missing (and since it is not automatic, there's no support yet). Again the redpaper helps: on page 146 (PDF 158) there is a note that reminds you that you can link a frameset to a composite. So what you need to do for your JavaView enabled views is to use a composite application. That is actually nice, since you have full backward compatibility: prior to R8 the frameset opens. In R8 the composite opens.


NotesStream.write != NotesStream.WriteText

QuickImage Category
Mental Note to self:

If you write something like:
the compiler will not warn you that you shall not provide a String where Byte is needed. But the runtime will take care of you! It will terminate the execution of your subroutine (not the entire script, just the subroutine) without an error message, because you shall not write String where Byte is needed, and before you commit this blunder again the routine terminates. And don't get me started on Streams, Transformers and character sets. Despite these little hiccups Blizz is progressing.


Link to me: Notes Document Links as URL, Notes Link and Attachments

QuickImage Category
In the beginning the world was easy. You wanted to notify a user? Just use @MailSend with [IncludeDocLink] and your email notification was done. It was obvious: if your application ran in the Notes client, your messaging application would be Notes as well.
Today the picture looks different. You might access your email through a portlet on the intranet or through a POP3 client. But you still want to be able to open that link in the Notes client, right? I'm not aiming at web enablement here. What are your options?
  • In a Notes client you can use NotesRichText.AppendDocLink
  • You could send a link with notes:// ...
  • You can create an attachment as an NDL file. To see what you need in there, copy a document link and paste it into Notepad. You will get this result:

    Wissel's Address Book - My Contacts
    <REPLICA 4825713F:00352EEC>
    <VIEW OF85255E01:001356A8-ON852554C2:00753106>
    <NOTE OFE3DB4DB9:F46B6DAE-ON482572F9:00366DC6>
    <REM>Wissel's Address Book</REM>

    Looks utterly familiar. Inside the NOTE element the document's unique ID is embedded, similar to the value in the property box. The stuff around the NotesDocument.UniversalID is static, so we can construct it. The VIEW is the unique ID of the view's design document. Luckily you can replace that with the name or alias of the view.
Since all 3 cases are needed over and over again, I added a little utility class to my script tools.
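A sketch of such a utility function (the name getNDLContent is my own invention; the quoted view-name form follows the tip above about replacing the view's unique ID with its name or alias):

```lotusscript
Function getNDLContent(doc As NotesDocument, viewName As String) As String
	' Builds the text of an NDL file pointing to doc.
	' ReplicaID (16 hex digits) and UniversalID (32 hex digits)
	' are split into the colon notation seen in the pasted link.
	Dim db As NotesDatabase
	Dim unid As String
	Dim crlf As String
	Set db = doc.ParentDatabase
	unid = doc.UniversalID
	crlf = Chr(13) & Chr(10)
	getNDLContent = db.Title & crlf & _
	"<REPLICA " & Left$(db.ReplicaID, 8) & ":" & Right$(db.ReplicaID, 8) & ">" & crlf & _
	"<VIEW '" & viewName & "'>" & crlf & _
	"<NOTE OF" & Left$(unid, 8) & ":" & Mid$(unid, 9, 8) & _
	"-ON" & Mid$(unid, 17, 8) & ":" & Right$(unid, 8) & ">" & crlf & _
	"<REM>" & db.Title & "</REM>" & crlf
End Function
```

Write the resulting string to a file with the extension .ndl and attach that file to the outgoing message.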


Playing with Notes Client form layout

QuickImage Category
In a recent training session a participant asked why Notes forms are so ugly. Naturally I pointed him to Chris. Nevertheless he got me thinking: how many steps does it take to make a Notes form look nice? Let us start with a very typical basic form:

Default ugly form

The first thing will be the background for toolbar and form. I borrow some style from Lotus Connections: adding background images leads here:

Slightly less ugly picture

Next step is to take care of the radio buttons, mandatory fields and the borders.


Do you need help for your Notes8 composite applications?

QuickImage Category
This is an IBM announcement our marketing manager asked me (very nicely) to publish here.

The recent release of the IBM Lotus Notes and Domino 8 platform gives developers more integration and business solution options than ever before. To celebrate this extensibility we would like to offer our Business Partners free access to Lotus development specialists who will assist you in creating Notes and Domino 8 business solutions.

Your existing solutions will continue to run in Lotus Notes and Domino 8 without modification -- all new integration options supplement existing application development capabilities. Our Lotus development specialists will work with you to explore possible integration scenarios and guide you through the details of this new set of integration options including:

The Composite Application Framework
The Lotus Notes 8 platform allows developers to build reusable, loosely coupled components which can be assembled into composite applications. These reusable components can be created from NSF, Eclipse or IBM Lotus Expeditor technology. Our specialists can evaluate your requirements and propose strategies for developing reusable components or improving integration with a composite application.

The Sidebar and Toolbar
The Lotus Notes 8 sidebar and toolbar are in the end user's sight at all times so they provide an excellent opportunity to display a ubiquitous view of your application or service. Our specialists can show you how to integrate components of your solution into the sidebar and toolbar most effectively. Work you've done developing for Lotus Sametime Connect will pay dividends here as well -- most plug-ins developed for IBM Lotus Sametime 7.5.1 are compatible with Lotus Notes 8.

Web Services
Lotus Notes and Domino 8 provide comprehensive support for web services producers and consumers on the Notes client and Domino server. Our specialists show you how to incorporate your web services in Notes and Domino applications to capitalise on existing IT investments and realise a service-oriented architecture.

Lotus Domino and DB2 Integration
Lotus Domino 8 provides a fresh opportunity for Business Partners with relational database experience and solutions. With Lotus Domino and DB2 integration developers can expose NSF data for relational use and list DBMS data in views and embedded views. Our specialists can discuss the possibilities afforded by this integration such as report generation and integration with other relational data stores.

To learn more about all that Lotus Notes and Domino 8 has to offer, please have a look at " Building on the evolution of people-centric applications" or " Creating business flexibility with IBM Lotus application development software" on the Lotus Notes and Domino developer site. For more information on composite applications see the composite applications page on developerWorks, or the composite applications blog.

If you'd like to find out more about how we can help you take advantage of Lotus Notes and Domino 8 please email us at We'll coordinate a follow-on call to review your needs and get you started right away.

At Lotusphere 2008 we will be looking to highlight Business Partner solutions that exploit the new integration possibilities in Lotus Notes and Domino 8 - including a new award category for these solutions. We encourage you to take advantage of this offer while it's available in preparation for Lotusphere.

Thank You,

Tim Shortley
Program Director, Worldwide ISV Technical Enablement, IBM Lotus Software


Lotus Notes Documents as RDBMS schema

QuickImage Category
I tried to draw a generic RDBMS schema description for a note. This is my first draft:
Notes as RDBMS schema


Switching between Domino server versions (Windows Edition)

QuickImage Category
When you upgrade a server there is always the possibility that something goes wrong when running the new version (you did do a full backup, didn't you?). So you need a "Plan B" for falling back to the original version. When upgrading from Domino 6.x to Domino 7.x or 8.x you can prepare a small script that automates the fall-back and the fall-forward. This is what you need to do:

Let us presume your Domino program files location is D:\Domino and your data location is E:\Domino\data
  1. Shut down your Domino server
  2. Copy the program directory:
    COPY D:\Domino\*.* D:\Domino6\*.* /S /E /V
  3. ZIP away all system templates in E:\Domino\data\*.ntf (we will restore the ones that are not provided in the update)
  4. Install Domino8 as update into D:\Domino and E:\Domino\Data
  5. Unzip the templates from Step 3. Do not overwrite existing files. Older versions of Domino run fine with the new system templates, but the new Domino server must have the new templates.
  6. Create the file SwitchDomino.cmd in a convenient location (Desktop?) with the following content:
    NET STOP  "Lotus Domino Server (Dominodata)"   <<<---- You need to check the exact name of your service 
    IF EXIST D:\Domino6\*.* GOTO R8TO6
    IF EXIST D:\Domino8\*.* GOTO R6TO8
    ECHO Something is wrong - NEITHER D:\Domino8 nor D:\Domino6 seems to exist
    GOTO DONE
    :R8TO6
    RENAME D:\Domino Domino8
    RENAME D:\Domino6 Domino
    GOTO RESTART
    :R6TO8
    RENAME D:\Domino Domino6
    RENAME D:\Domino8 Domino
    :RESTART
    NET START  "Lotus Domino Server (Dominodata)"
    :DONE
  7. Restart your server
If you need to fall back you just execute the cmd file, and after a few minutes you have switched versions. Caveat for R8: R8 can use the ODS48 format. Once you are there, there is no fast way back, since you would need to use compact to revert to the previous ODS format, which can take quite some time.
Remark: You do back up your data, don't you?

On Linux systems you play with symlinks, but that's a story for a different time.


How many attachments are on your servers?

QuickImage Category
In a recent discussion a customer asked how much space they could save on their mail servers if they moved all attachments to Lotus Quickr or a content management solution like IBM DB2 CommonStore for Domino. While you can easily inspect a database for size and number of documents, that doesn't tell you how much of this size is attributable to attachments. Since altering the mail template to add an attachment view was out of the picture, I whipped up a little script that will scan an entire server and report the size figures back into a database:
'ScanForSize:
Option Public
Option Declare
Use "SizeTools"

Sub Initialize
    Dim s As New NotesSession
    Dim db As NotesDatabase
    Dim curDB As NotesDatabase
    Dim dbDir As NotesDbDirectory
    Dim serverName As String
    Set db = s.CurrentDatabase
    If db.Server = "" Then
        serverName = "(local)"
    Else
        serverName = db.Server
    End If
    serverName = Inputbox("Please select the server to scan (local) for local", "Server Scan", serverName)
    If serverName = "" Then
        Exit Sub
    End If
    If serverName = "(local)" Then
        Set dbDir = New NotesDbDirectory("")
    Else
        Set dbDir = New NotesDbDirectory(serverName)
    End If
    Set curDB = dbDir.GetFirstDatabase(TEMPLATE_CANDIDATE)
    Do Until curDB Is Nothing
        Print "Working on " & curDB.FilePath
        Call countThis(curDB, db)
        Set curDB = dbDir.GetNextDatabase
    Loop
    Msgbox "Run completed"
End Sub

Sub countThis(curDB As NotesDatabase, reportDB As NotesDatabase)
    Dim curSize As DBSizeInfo
    On Error Resume Next
    Set curSize = New DBSizeInfo(curDB)
    Call curSize.count
    'We save only database information that has an attachment
    If curSize.AttachmentCount > 0 Then
        Call 
    End If
End Sub
This LotusScript was converted to HTML using the ls2html routine,
provided by Julian Robichaux at


Lotus Notes Tips & Tricks in Thai Language

QuickImage Category
My colleagues Pong Yawagun and Nithitus Upatumvipanon from IBM Thailand have started their Lotus Notes blog. They will provide their insights into Lotus technologies and also translate articles from others (including this blog) into Thai. Go and welcome them! And yes, NotesSensei seems to be going into franchise.


20 Easy Steps to Screw Up a Domino Upgrade

QuickImage Category
Domino upgrades are comparatively easy and can be quite fast. Of course you need to know what you are doing. There might be times when you don't want a smooth ride, like when you are paid by the hour or when you want to move to a different platform. In that case this post is for you. Follow these easy-to-implement steps and your upgrade will be an utter failure. But be aware: you have to set the stage before the upgrade by voicing concerns, so you can blame it on IBM when things start to fail --- and they will.
  1. Do not upgrade the admin's workstation. Using the latest version of Domino Administrator would actually give you access to the latest features; avoid this
  2. Make sure, that you don't have access to all servers and that you are not a system administrator of the domain
  3. Make sure, that you can't reach all the servers from the admin workstation
  4. Start upgrading the most critical application server first, and the hubs and the administrative server last
  5. Make sure you have enough time so the design task on an old server can undo all design changes the upgrade process might have applied
  6. Make sure you have touched and customized all system templates, so their change date is younger than the new version. Also make sure the templates don't replicate everywhere. A mess in the system template will give you weeks of (billable) fun
  7. Make sure, that your system databases don't replicate or replicate outside the deletion stub life cycle interval
  8. Never ever upgrade the design of the Domino Directory or other system databases. If you did, the upgraded server could actually use new settings instead of the defaults.
  9. Try to make sure that your ODS is at least one version behind. This keeps the indexer busy.
  10. Yes a full hard disk is also a good idea: if the available space is less than your biggest database and less than 20% that is good
  11. Enable disk compression. Let the OS scream every time it needs to uncompress Notes databases on access
  12. Use a desktop virus scanner on your server. Make sure it scans every file on access
  13. Don't uninstall any 3rd-party extensions (like the virus scanner or compliance plug-ins). It is more fun to search for such issues. And while you are at it: don't use the latest version of 3rd-party apps; they actually might be compatible with the new version of Domino.
  14. Enable transaction logging with either the OS or the data drive as destination. Hard disks should spin hard.
  15. Make sure you don't apply your OS level service packs or patches. This is especially important on W2K3 and W2K3/64.
  16. Do not apply the latest release of your source version (like 5.0.13a when you come from R5, or 6.5.6 when you come from R6.x). This is especially fun when skipping a version
  17. Make sure the messaging people don't talk to the apps people, and vice versa. Also don't notify the apps people that you are doing the upgrade
  18. Do not maintain documentation of which applications use which system template for lookups. Anyway, documentation is for wimps.
  19. Be creative. Look for best practices in Redbooks, on the IBM site and other online locations. Put a "not" or "don't" in front of every recommendation.
  20. Don't use fixup or compact before or after the upgrade.

This list is by far not complete. So if you know another way to screw your upgrade, let me know.


Customizing system templates

QuickImage Category
Everything in Notes can be customized by tweaking the design. I highly recommend never to tweak the design in a database itself, but to use Domino's template capability. You can have multiple databases using the same layout. Even if you build a specialized custom application you will have at least 3 databases sharing the same design: development sample data, UAT sample data and production data. So use templates. And no, hiding the design is not a nice thing to do and actually not a security feature (it's rather an obscurity feature).
In a lot of installations I came across, system databases have been altered: the Domino Directory, the log, the mail template etc. IBM support always has a hard time helping in these environments. There is a rather strict policy to insist on original IBM templates. Following a few easy steps, however, you can have your cake and eat it too. The secret is to use the template system wisely. Following the steps has quite some advantages:
  • Your customizations are protected from being overwritten since they are sitting in their own database
  • Your customizations are visible since they are sitting in their own database
  • Since the original templates are untouched future fixes are easily included in your actual database
  • You can switch back and forth between the IBM template and your customized template using "replace design". This is extremely useful when troubleshooting issues: switch to the original IBM template, if the problem is gone you know where to look. If the problem persists you don't need to argue with support about customization having an impact

These are the steps:


Syncing documents in 2 views

QuickImage Category
I got quite a few requests for the details of the "syncing documents in views" pattern/anti-pattern. So this is what you need to do:
  1. Have 2 views with identical keys, sorted by the key
  2. Julian's OpenLog database
  3. The main function SyncAllDocument
  4. 3 auxiliary functions: getCompareKeySource, getCompareKeyTarget and updateSourceDoc
In my example code I made the assumption that the documents contain the same fields and have a single key. You might have different requirements, like computing things and/or only copying some of the items, or doing a ComputeWithForm before saving. Nevertheless you don't need to touch the main routine for that. You get the idea.


Domino Anti-Pattern: Delete and Recreate

QuickImage Category
When reviewing agents or troubleshooting misbehaving databases I come across a popular anti-pattern for Domino: delete and recreate. Anti-patterns are commonly applied solutions to well-known problems that are known to fail. "Delete and recreate" can be found in report-generating applications or in databases that pull values from other sources like an RDBMS or flat files. Mostly you find this pattern when developers come from an RDBMS background, where deleting equals the total removal of information. In Domino things are a little different. When you delete a document something remains behind: a deletion stub. A deletion stub is the DocumentUniqueId plus a flag that says "I'm a deletion stub". The stub is then replicated to other servers or clients to remove the document in the replicas as well. The default life span of a deletion stub is 30 days.
In a recent analysis I was looking at a database with just 400 documents that was big, slow and prone to crashes. On closer examination I found an agent performing this pattern on the 400 documents every 30 minutes. Do the math: 400 x 48 x 30 = 576,000 deletion stubs. Quite some baggage for just 400 entries.
So what are the alternatives?
Requirements like this are often solved with another anti-pattern: "loop through a big loop and do a dbLookup (or GetDocumentByKey) for each iteration". But there is an easier way. Typically there is a sorted key (customer number, part number, record id etc.) that is available in both source and target. Create a collection for source and target and use what I call "The Tango" or "The Wiggle". The pseudo code looks like this:
  1. Read first source key and first target key
  2. Do until you run out of keys
  3.  if source key is equal target key: call sub routine comparing the two records how they have to be compared and update and save target if it has changed (but not otherwise)
  4. if the source key is bigger than the target key: delete the current target document and read the next target key
  5. if the target key is bigger than the source key: insert the source document into the target and read the next source document
  6. End of the do loop

Of course your production code needs to handle the case that you run out of source keys (= delete all remaining target documents) or target keys (=insert all remaining source documents).
This way you don't create unnecessary zombie deletion stubs.
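The steps above can be sketched in LotusScript like this (getSourceKey, getTargetKey, updateTarget and insertIntoTarget are placeholder helpers you would implement for your own data; both views must be sorted ascending by the key):

```lotusscript
Sub syncAllDocuments(sourceView As NotesView, targetView As NotesView)
	Dim sourceDoc As NotesDocument
	Dim targetDoc As NotesDocument
	Dim nextTarget As NotesDocument
	Set sourceDoc = sourceView.GetFirstDocument
	Set targetDoc = targetView.GetFirstDocument
	Do Until (sourceDoc Is Nothing) And (targetDoc Is Nothing)
		If sourceDoc Is Nothing Then
			' Out of source keys: remove all remaining targets
			Set nextTarget = targetView.GetNextDocument(targetDoc)
			Call targetDoc.Remove(True)
			Set targetDoc = nextTarget
		ElseIf targetDoc Is Nothing Then
			' Out of target keys: insert all remaining sources
			Call insertIntoTarget(sourceDoc)
			Set sourceDoc = sourceView.GetNextDocument(sourceDoc)
		ElseIf getSourceKey(sourceDoc) = getTargetKey(targetDoc) Then
			' Same key: update and save target only if it changed
			Call updateTarget(sourceDoc, targetDoc)
			Set sourceDoc = sourceView.GetNextDocument(sourceDoc)
			Set targetDoc = targetView.GetNextDocument(targetDoc)
		ElseIf getSourceKey(sourceDoc) > getTargetKey(targetDoc) Then
			' Target no longer exists in source: delete it
			Set nextTarget = targetView.GetNextDocument(targetDoc)
			Call targetDoc.Remove(True)
			Set targetDoc = nextTarget
		Else
			' Source is new: insert it into the target
			Call insertIntoTarget(sourceDoc)
			Set sourceDoc = sourceView.GetNextDocument(sourceDoc)
		End If
	Loop
End Sub
```

Note that the next target document is fetched before the current one is removed; navigating from a deleted document would fail.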


A CRUD webservice - Part 2

QuickImage Category
In Part 1 of this Show-N-Tell Thursday I showed how to generate a WSDL definition out of a Notes form. While this is nice, it leaves quite a bit of work to actually implement the service. In this installment we take the opposite route: instead of generating a WSDL definition, LotusScript code will be generated out of a form definition. Unfortunately DXL doesn't support web services very well (at least up to R8.0), so I generate a text file that contains pure LotusScript code. This text file needs to be manually imported into an empty web service (copy and paste will do).

I generate a few classes, including a custom class which you can overwrite to implement your specific additions to the code. You might want to put that into a different library, so you keep your customizations. Nota bene: the generated code is not real production quality. I haven't implemented good logging or error handling (OpenLog, anybody?). Once you see the code you get the idea and can complete it easily (feedback would be nice).

A web service requires a LotusScript class with public members, properties and/or functions. Domino Designer then generates a WSDL file for us. One path to travel would be implementing generic code that takes fields as name/value pairs. While tempting, this has the clear disadvantage that the web service becomes rather unspecific and we can't take advantage of strong validation using an XML Schema. So my approach is to analyze the form and generate 2 XML structures: one with all input fields, used to create and update a document, and one with all fields, used to read a document. I don't take hide-when formulas into account.

There are a few challenges to overcome. First, we need to deal with multi-value fields; in WSDL these are expressed as arrays. Second, we need to map the Domino data types to WSDL data types. Luckily half of the work is done, since Domino includes "lsxsd.lss", which translates WSDL data types into LotusScript. From LotusScript to NotesItems is a path well travelled.

Let's have a look at the XSLT stylesheet (if you don't know how to transform DXL using XSLT, use this plug-in -- you are using Firefox, are you?)

To instruct the XSLT processor to output plain text rather than XML or HTML we use this statement:
<xsl:output indent= "yes" method= "text" omit-xml-declaration= "yes" media-type= "text/lotuscript" xml:space= "preserve" />

Then inside the root tag the LotusScript Code is written as you would write in Domino Designer.
  <xsl:template match= "/" >
        <!-- We use the build-in data type mapping for LotusScript agents -->
        %INCLUDE "lsxsd.lss"
        '  Autogenerated code to be embedded into a Domino Web Service
        Public Class form <xsl:value-of select= "$formName" />

We have 2 levels of comments here. XSLT Comments looking like this: <!-- ... --> and regular LotusScript comments starting with '.
Whenever we need to get specific we can use <xsl:value-of to pick a value from the DXL form.

The class we will expose into the webservice is, other than its name and the content of the parameters pretty standard and (IMHO) self explaining:

Public Class form <xsl:value-of select= "$formName" />
            '* createData takes in the custom form data and creates a new document.
            '* the data given as parameter matches all input fields of the form
            Public Function createData(formData as <xsl:value-of select= "$formName" /> InputData) as ResultCode
                    Dim curData as new <xsl:value-of select= "$formName" /> CustomData
                    Set createData = curData.createData(formData)
            End Function

        Public Function createAndReadData(formData as <xsl:value-of select= "$formName" /> InputData) as ExtendedResult
                Set createAndReadData = new ExtendedResult
                Set createAndReadData.Data = new <xsl:value-of select= "$formName" /> CustomData
                Set createAndReadData.Result = createAndReadData.Data.createData(formData)
        End Function

        Public Function readData(key as String) as <xsl:value-of select= "$formName" /> CustomData
                    Dim tmpResult as new <xsl:value-of select= "$formName" /> CustomData
                    Call tmpResult.readData(key)
                    Set readData = tmpResult
            End Function
            Public Function updateData(key as String, formData as <xsl:value-of select= "$formName" /> InputData) as ResultCode
                    Dim curData as new <xsl:value-of select= "$formName" /> CustomData
                    Set updateData = curData.updateData(key, formData)        
            End Function
            Public Function updateAndReadData(key as String, formData as <xsl:value-of select= "$formName" /> InputData) as ExtendedResult
                    Set updateAndReadData = new ExtendedResult
                    Set updateAndReadData.Data = new <xsl:value-of select= "$formName" /> CustomData
                    Set updateAndReadData.Result = updateAndReadData.Data.updateData(key,formData)
            End Function
            Public Function deleteData(key as String) as ResultCode
                Dim curData as new <xsl:value-of select= "$formName" /> CustomData
                Set deleteData = curData.deleteData(key)
            End Function
        End Class
        Public Class ResultCode
            Public Value as Integer 'We use HTTP values
            Public DBid as String 'Database and document we try our luck with
            Public DocID as String
        End Class
        Public Class ExtendedResult
            Public Result as ResultCode
            Public Data as <xsl:value-of select= "$formName" /> CustomData
        End Class


Stress-Testing Lotus Domino applications with JMeter

QuickImage Category
There is a current discussion going on about if and how to use WebQueryOpen agents. Jake has some excellent examples of what you can achieve, while Michel didn't find performance differences between an embedded view and a WQO agent. I was actually not surprised, since a single user wouldn't show any difference. My suspicion is that the results will change once you put the server under load. So I was looking for a tool that anybody could use to run a little test (so Rational Performance Tester was out of the picture).
The search was short. Apache offers a tool called JMeter. I don't know how it stacks up against other frameworks and/or tools, but it does everything we need for the little test. What is also quite nice about JMeter: there is a Wiki and plenty of online information how to use it. Once you download the binaries and expand them into a directory, you are good to go.
JMeter is started as a Java application using the jmeter.bat file. You also have the option to run it from the command line or even start remote instances. To work successfully with JMeter you need to understand a few concepts. JMeter is hierarchically organized: JMeter executes test plans, test plans contain thread groups and thread groups contain tests. A test consists of at least one sampler. You also want to add a listener to show your results graphically or as a table. You have a lot of options to configure and script, so it will take quite a while until you can take advantage of all the functionality. The good news, however: it takes only a few minutes to configure a simple load test like "hammer this URL with 1000 users".

These are the steps:
  1. Start JMeter
  2. Right-click on Test Plan: add a Thread Group. Specify the number of users (a.k.a. threads) and the ramp-up time, which is how long JMeter will take to start all the threads. You can also specify the number of loops to run.
  3. Right-click on the Thread Group and add a "HTTP Request HTTP Client" sampler. Define the web server, port and relative URL (typically "/myNSF.nsf?OpenForm", "...?OpenDocument" or "...?EditDocument").
  4. Right Click on Test Plan and add some listeners like the "Summary Result" or "Graph Result"
  5. Save the file (Ctrl+S)
  6. Run the plan (Ctrl+R)

That is all. You can look at the results in a tabular or graphic form, depending on the listeners you defined. Next step: run it against your favorite WQO.


Domino Application - ZEN style - Part 2

QuickImage Category
Continued from Part1.
Our application should start with a screen showing only my documents, with the option to switch to documents where I'm the approver. To do that I create a view based on the ($All) view and just drag the Requestor column to the front. I also sort the view descending by date, so older entries will eventually disappear.
Create a new view based on an existing one
The first column is categorized. A simple click will do. The column shows the content of the field requestor:

Categorized by requestor
We have the formula @Name([Abbreviate];Requestor) in the column. We could replace that with Requestor or leave it; it doesn't really matter. What matters is to use the same style later in the page where we embed the view. We will come back to the view in an instant to refine it. Now it is time to insert it into the page. I copy the content from the $$ViewTemplate for ($All) into a new page MyRequests. Doing that you will realize that the field $$ViewBody doesn't get copied: a page can't hold fields.
To insert the view I select Create - Embedded Element - View:
Inserting a view

The Properties need adjustment. So I select to use HTML and allow for 50 entries and page width (on the second tab).
Selecting Embedded View Properties


SSL in Domino agents

QuickImage Category
This is a follow-up post to an older thread on configuring SSL. Reading from remote locations can be a headache... unless you stand on the shoulders of giants. Here are the steps that worked for me. While they are designed for R7 upwards with a JVM 1.4+, they also work in R6 with the optional Sun SSL packages (just read the older post for the configuration).

Update: The class didn't process HTTPPost correctly, so I updated the code, changes in bold.

What do you need:
1) Apache Commons HTTP Client
2) Apache Logging library (and codecs)
3) EasySSL Classes ( EasySSL, EasyTrustManager)

Update (Thx John): The links above don't work anymore. EasyTrustManager can be found here and here, EasySSL here and here.
Traversing the broken URL above leads to the "readme" pointing to the new home called earth. A wildcard redirect would have been nice.

Once you have that a few simple lines of code will do. Note: you don't even need to configure SSL (but you SHOULD understand the security implications of NOT configuring it).
This post is also available on (or whatever it is called now).

Here is the class ...


Domino Application - ZEN style

QuickImage Category
Great work is under way to make Domino work as a first-class citizen in the shiny new Ajax world. With this field covered I decided to look the other way: how minimalistic can a Domino application get and still work properly? I want to build a small approval application with just one approver level. It is mainly for documentation, not for executing a sophisticated, mapped-out workflow. Something that is a little bit more structured than sending eMail requests.
So let us assume I know HTTP/HTML very well. I don't know JavaScript (and I might want to use the app with devices that don't support JavaScript). I don't know Java or LotusScript. I have a limited understanding of @Formula (some basic stuff like @Command([FileSave]) or @Trim(@ThisValue) or @If(...)). I also have a friend who does the CSS for me. I do have an excellent understanding, however, of how Domino does forms, how Author and Reader fields work and how Domino URLs are constructed (which resemble some kind of REST API, so learning them with my background in HTTP was quite easy). How far can I go?

There are quite some steps involved. But they are less than it looks like.

Step 1: Create the database
Create a new database

Step 2: Set the ACL
Setting the access control
Since I want anybody to be able to use this little gem, I set default access to Author. Please --> in your environment follow best practices: set it to -No Access- and have an appropriate access group... but that is what the administrator will do for you anyway. Just keep in mind to create the [Debug] role (you might not understand it yet, but you liked the idea in that SnTT post).


The Eventful Notes Client -- Part 2

QuickImage Category
Slavek suggested in his comment to "The Eventful Notes Client" to use the status bar instead of popup messages. I replaced "@Prompt([OK];"Eventful Notes";" with "@StatusBar(", "MsgBox" with "Print" and "alert(" with "window.status = (". Being naturally lazy I used Teamstudio Configurator for this. Here we go:

-- Open database into frameset --

07/09/2007 09:42:47 PM  Status Msg: Menu Page Title
07/09/2007 09:42:47 PM  Status Msg: Name of embedded Outline
07/09/2007 09:42:47 PM  Status Msg: $All Global Initialize
07/09/2007 09:42:47 PM  Status Msg: $All Initialize
07/09/2007 09:42:47 PM  Status Msg: $All QueryOpen
07/09/2007 09:42:48 PM  Status Msg: Initialize for database NotesEventExample
07/09/2007 09:42:48 PM  Status Msg: PostOpen for database NotesEventExample
07/09/2007 09:42:48 PM  Status Msg: $All PostOpen
07/09/2007 09:42:48 PM  Status Msg: $All OnSelect

-- Change to a different view --
07/09/2007 09:43:21 PM  Status Msg: $All QueryClose
07/09/2007 09:43:21 PM  Status Msg: $All Terminate
07/09/2007 09:43:21 PM  Status Msg: $All Global Terminate
07/09/2007 09:43:21 PM  Status Msg: SecondView Global Initialize
07/09/2007 09:43:21 PM  Status Msg: SecondView View Initialize
07/09/2007 09:43:21 PM  Status Msg: SecondView QueryOpen
07/09/2007 09:43:21 PM  Status Msg: SecondView PostOpen
07/09/2007 09:43:21 PM  Status Msg: SecondView OnSelect


The Eventful Notes Client

QuickImage Category
There have been many attempts to explain what Lotus Notes is. I'd like to add my own - from the perspective of software development:

Lotus Notes is an event engine that provides the developer with the capability to assemble an application from predefined events and react to them.

The languages available to react to events are @Formula, LotusScript, JavaScript, Java and C/C++. The selection of available languages depends on the event you want to react to. Some examples of events are: database opened, default value of a field, saving a document etc. There are MANY events, and knowing which event to use, and when, is what makes the Notes developer <g>
I assembled a small database with one outline, one page, one frameset, two views, one form, one subform and a few fields. I "mined" almost every event I could find with prompts to determine the sequence of the events firing (I didn't mine Hide-when formulas, the onKey..., onMouse... and onChange events).

After putting all the events into code I performed the following actions, starting with the empty database:
  1. Open the database (into the frameset)
  2. Switch from the View "$All" to the View "SecondView"
  3. Create one document "EventSample"
  4. Fill all fields with values
  5. Press F9 to recompute
  6. Change one more value
  7. Press Ctrl+S to save
  8. Close the document
  9. Repeat Step 3-8
  10. Select the View "$All"
  11. Delete the current document (Del key)
  12. Select the next document
  13. Select the deleted document
  14. Undelete the document
  15. Close the database
I didn't play with agents, drag & drop or web documents (that would add even more events). There are a lot of events firing; see for yourself:

Update: Also check out Part 2 of the story.


Advanced Domino Web Development

I've been charged with delivering a workshop "Advanced Domino Web Development". Musing over the content of this workshop I realized that you need just five things:
  1. Understanding
  2. Tools
  3. Libraries
  4. Reference
  5. Code
In detail, there is a little more to consider. Since a picture is worth a thousand words, here are the details as a MindMap:
To create this map I used a trial version of Tony Buzan's very own software.


Lists in @Formula, especially @Member and @IsNotMember

Theo Heselmans has a nice post about how to figure out whether a user has a specific role without using @IsMember or @IsNotMember. He doesn't like @IsMember and @IsNotMember. While that is a question of taste (or speed?), it is a good idea to fully understand how the various comparisons work: = and *= work differently with lists than @Is(Not)Member. One of the hallmarks of @Formula is its ability to deal with lists (multiple elements, similar to arrays) in the various formulas. In a lot of @Functions you can use lists ("red":"blue":"green") to compute your results. If a Notes item (a field) contains multiple values, it is treated as a list in formulas. The following table shows the effect of the various comparisons. I limit myself to matches; you can play with "greater" and "less" comparisons yourself.
Let us assume we have these lists:

list1 := "red":"yellow":"green"
list2 := "yellow":"red"
list3 := "black":"blue"
list4 := "red":"white"
list5 := "yellow"

= (matches for @True: one match, same position)
Compares every single value with the value at the same position in the other list. If one list has fewer elements than the other, the shorter list is padded with its last value. A list of one element is compared with every element of the other list. If there is ONE match, @True is returned.
Examples: list1 = list2, list2 = list5, list5 = list1

*= (matches for @True: one match, any position)
Compares every member of the first list with every member of the second list. If there is a single match, @True is returned. In contrast to =, the position doesn't play a role and no padding is needed.
Examples: list1 *= list2, list2 *= list5

![Expression] (matches for @True: depends on the expression)
Turns the expression into its opposite. Very clean. Should always be in front; for readability you might want to use brackets.
Examples: !list1 = list2, !(list1 = list2), !(list2 *= list5)

!= (matches for @True: one mismatch, any position)
The not-equal operator. Compares element by element and pads a shorter list with its last element (same as =). Returns @True if one compared pair is different. In other words: it only returns @False if both lists have the same members in the same sequence.
Example: list1 != list2 (@True)

*!= (matches for @True: one list has 2 different values)
Compares every element from both lists and returns @True if it finds one difference. It will only return @False if both lists contain just one distinct value (which can occur multiple times; that wouldn't matter).
Examples: "red":"red" *!= "red":"red":"red", list1 *!= list2

@IsMember (matches for @True: all, any position)
Checks that all members of the first list are present in the second list. The sequence doesn't matter. If there is one element in the first list that is not in the second list, it returns @False.
Example: @IsMember(list2;list1)

!@IsMember (matches for @True: one mismatch, any position)
Reversal of @IsMember. If there is just one element in the first list that is not in the second list, it returns @True.
Example: !@IsMember(list2;list1)

@IsNotMember (matches for @True: all mismatches, any position)
Checks that none of the elements of the first list is in the second list. If just one element is there, but not the others, it already returns @False. So you can't replace !@IsMember with @IsNotMember for list arguments.
Example: @IsNotMember(list2;list1)

!@IsNotMember (matches for @True: one match, any position)
Double negation to make your head spin <g>. Needs only a single match to return @True. Equivalent to *=.
Example: !@IsNotMember(list2;list1)

Does your head spin now?
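To make the rules concrete, here is a small worked sketch in @Formula (using the lists defined above; the results in the comments follow from the rules in the table):

```
REM {Worked examples, using the lists from above};
list1 := "red":"yellow":"green";
list2 := "yellow":"red";
list3 := "black":"blue";
list5 := "yellow";

r1 := list1 = list2;              REM {@False: list2 is padded to "yellow":"red":"red", no positional match};
r2 := list2 = list5;              REM {@True: the single "yellow" matches the first element of list2};
r3 := list1 *= list2;             REM {@True: "red" and "yellow" occur in both lists};
r4 := list1 != list2;             REM {@True: at least one position differs};
r5 := @IsMember(list2; list1);    REM {@True: "yellow" and "red" are both in list1};
r6 := @IsNotMember(list3; list1); REM {@True: neither "black" nor "blue" is in list1};
@Text(r1) + " " + @Text(r2) + " " + @Text(r3) + " " + @Text(r4) + " " + @Text(r5) + " " + @Text(r6)
```

Paste it into a computed text field to check the results yourself.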


Hacking the Lotus Notes 8 UI

Bored with your normal splash screen? One of the exciting new features of the Eclipse RCP based Notes 8 client is themes. Themes allow you to change many visual aspects of the Notes client UI. While serious developers will take the time and effort to work closely with the corporate design department to define a unified user experience in all corporate colors and fonts, the hackers among us will be in for a quick fix. Here you go:
  1. Locate the directory ... inside <Notes Program Dir>\framework\shared\eclipse\plugins\
  2. Create a backup of it, in case you screw it up
  3. Edit splash.bmp, add a lotus flower
  4. Edit themes/notes.css, look for mailtable>row>unread and change the color from black to red (yes, it is back)
  5. Do other weird things to your heart's desire
Your mileage might vary!


View hidden fields while debugging.

A short one today. A typical pattern for Notes and Domino forms is to have hidden fields at the beginning and end of a form. These hidden fields store lookups, IDs, relations and all sorts of things. As an unwritten convention these fields are marked red and hidden from reading and editing (basically all hide-when options are checked). While developing or troubleshooting applications it is important that you can see the values of these fields. So instead of toggling all the hide options I use the following approach:
I create a hidden Computed For Display field of type number named "IsNotDebug". I put this formula in:
Then I use this formula in the hide-when section: IsNotDebug. It reads almost like plain English: hide when (it) is not debug.
By adding or removing the role [Debug] from your user ID you can switch the display of the hidden fields on and off. If you do a lot of local development, you might opt for an environment variable instead of a role.
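The IsNotDebug formula itself can be very short. A minimal sketch (assuming the role is literally named [Debug]; @UserRoles returns role names including the square brackets):

```
REM {IsNotDebug: 1 = hide (user lacks the [Debug] role), 0 = show};
@If(@IsMember("[Debug]"; @UserRoles); 0; 1)
```

Because the field is Computed for Display, the value is evaluated fresh every time the document is opened and never stored.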
A final tip: to make things look better, I often find hidden fields neatly put into tables with some explanations in the second column. This is not a good idea. Hide-when formulas are executed separately for every paragraph in every cell, so you create a lot of extra computation. Best is to have the fields in a single paragraph (newlines with Shift-Control are OK). I use the pattern
Field : [the field] (Comment) | <-this could be just a bar or a new line.


Summer diet for your Notes Forms

The title I wanted to give this post was "Spring clean your Notes forms", but then it got delayed....
One hallmark of Notes applications is their durability. It is quite common to find applications in use that date back to R3 or R4. Quite a number of these applications still look like it, too. That's the flip side of not needing to rip-and-replace. Sometimes you will also find that older forms load slower and behave oddly at times. Before you start beautifying your applications, it is a good idea to do some spring cleaning. To do so I use Domino's DXL to extract a form as XML, filter it through an XSLT transformation and reimport the result into the database.
The LotusScript code is very straightforward, has been well covered on developerWorks and in the help file, and looks like this:

 Set importer = session.CreateDXLImporter(stream, dbCopy)
 importer.ReplicaRequiredForReplaceOrUpdate = False
 importer.DesignImportOption = DXLIMPORTOPTION_CREATE
 Call importer.Process

The "meat" is in the transformation. The XSLT file consists of four principal sections that contain templates. Section 1 produces the start of the output, including the wrapper and calls to various templates. In section 2 design elements are filtered out, in section 3 design elements are tweaked, and finally in section 4 the remaining DXL is copied 1:1 to the resulting document. The sequence of the sections is not relevant, since XSLT uses priorities, not sequence, to determine which template to apply. What are the elements you can or should spring clean?

  • In DXL I found NotesItems between the </body> and the </form> tag. Removing these from the form makes the form smaller and load faster, without any change in behavior.
  • Font information is encoded inside the <run> tag. The <run> tag is similar to HTML's <span>. When you have designed and re-designed forms over and over, there will be <run> tags that only contain a font change but no actual characters. Filtering those out lightens the form
  • Filter out all paragraph formats and/or fonts to be able to apply a new look & feel more easily
  • Convert access controlled sections into subforms (to work in the web)
  • Remove (all) LotusScript code and move it to libraries (see an upcoming SnTT post on "Classical Forms" about that).

Let us look at some code....
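A minimal skeleton of such a stylesheet could look like this (the match expressions are illustrative sketches, not the exact filters; the d prefix is assumed to be bound to the DXL namespace, and section 4 is the classic identity copy):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:d="http://www.lotus.com/dxl">

  <!-- Section 1: entry point and output wrapper -->
  <xsl:template match="/">
    <xsl:apply-templates/>
  </xsl:template>

  <!-- Section 2: filter out unwanted design elements (an empty template drops them) -->
  <xsl:template match="d:item"/>

  <!-- Section 3: tweak design elements, e.g. drop run tags that carry only font information -->
  <xsl:template match="d:run[not(text()) and not(*[not(self::d:font)])]"/>

  <!-- Section 4: identity copy for everything else -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>
```

Because specific templates outrank the generic identity template, everything you don't explicitly filter or tweak passes through unchanged.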


Loading your own plug-ins into Notes 8

Notes 8 is built using Lotus Expeditor, which is an Eclipse RCP application.
So theoretically you should be able to load your own plug-ins (which might not even be Notes related). Of course this is utterly not supported <g> (the supported way is to have the admin push them out to you). The functionality to point to an Eclipse update site is switched off by default. To switch it on, locate this file: <notes install dir>\framework\rcp\plugin_customization.ini

Add one line

and restart the Notes client. You will get access to the standard Eclipse Update site dialog here:

loading Plugins

It would be interesting to hear which plug-ins work (Azureus, anyone?)


Picking a Name for Your Domino Form

Domino Designer allows you to pick whatever form name you deem fit. It can contain letters, numbers, spaces, special characters. However when you develop for the web you might want to be a little more picky with what you use. Domino renders the form name into the HTML form name when showing the form in the browser. It also generates some JavaScript referencing this form. The form "Memo" is translated into "_Memo", "Response" is translated into "_Response". However "9. Market Survey" or "action.7" or "Bla Blub" all get translated into "_DominoForm". So if you have any logic that relies on the form name, you want to make sure the form name you pick can serve as a JavaScript variable name too.
If you retrofit an existing application, you might want to use a form alias to achieve that. And for replacing the form name everywhere else there is Teamstudio Configurator.
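As a rule of thumb, the behavior can be modeled with a small sketch (my own approximation, not Domino's actual code): a form name survives as "_&lt;name&gt;" only if it can serve as a JavaScript identifier, everything else collapses to the generic "_DominoForm".

```javascript
// Sketch only: models the naming rule described above, NOT Domino's
// actual implementation. A form name is kept (prefixed with "_") when
// it is a valid JavaScript identifier; otherwise "_DominoForm" is used.
function htmlFormName(formName) {
  var isValidIdentifier = /^[A-Za-z_$][A-Za-z0-9_$]*$/.test(formName);
  return isValidIdentifier ? "_" + formName : "_DominoForm";
}

// Examples from the post:
// htmlFormName("Memo")             -> "_Memo"
// htmlFormName("9. Market Survey") -> "_DominoForm"
// htmlFormName("Bla Blub")         -> "_DominoForm"
```

If your own form names pass this check, any JavaScript that references the generated form by name will keep working.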


Designing a CRUD Webservice for a Domino Form

Domino 7 allows you to create web services easily. You can start with a WSDL file and have Domino generate the stub classes for you, or you start with the class file and get a ready-made WSDL file. Unfortunately both approaches require that you code quite a bit. Domino doesn't provide a ready-baked set of SOAP web services out of the box. You do have a kind of REST web service using ?ReadViewEntries, but that is read only.
A very likely candidate for a web service is the creation, update and deletion of documents with a given form (commonly referred to as CRUD). Domino allows you to store any field in a document, even if it hasn't been defined anywhere, making it very flexible and thus difficult to understand if you are used to an RDBMS.
It would be fairly easy to design a web service (after Julian has enlightened you here, here and here) that allows an arbitrary number of field/value pairs to be stored in a document. However this would defeat the principle of clear-cut interfaces. More appropriate is to limit the possible input to the input fields defined in a form and to give back all fields, including the computed ones. How to get there?
  1. Export a domino form as DXL
  2. Transform the DXL into WSDL using XSLT
  3. Import the WSDL into a Domino Web Service
Creating the WSDL file first, instead of the web service code, has a number of reasons. First, it follows the "contract first" programming pattern, where the WSDL file is the "contract" and the code has to stick to that interface. Secondly, it allows you to decide whether to use LotusScript or Java to implement the service. Last but not least: currently you can't easily export/import a web service design element in DXL.


doc.ComputeWithForm Revisited


doc.ComputeWithForm seems to be confusing for a lot of developers. The Designer help states: "Validates a document by executing the default value, translation, and validation formulas, if any are defined in the document form." No word about computed fields. In Notes 4.6 they didn't compute, which can still be a problem for some. Starting with R5 they compute, even if it sometimes doesn't look like it. Still, some challenges and an odd behavior remain.

In a nutshell these are the facts you need to keep in mind when using doc.ComputeWithForm:

  • For editable fields the Default, Input Translation and Input Validation formulas are executed. The Default formula is executed when the field does not exist in the document. Typically that is only the case for a new document, but it can also be true when the item was removed through code or the field was added to the form later
  • For computed fields the formula is executed
  • For computed when composed fields the formula is executed if the item does not exist in the document (same rules apply as for the default formula)
  • Computed for display formulas are NOT executed. Therefore any formula depending on a field value of a computed for display field will take the value as empty thus producing unexpected results
  • Side effects in Formulas like FIELD xxx := or @SetProfileField are executed. @Commands are not. Also @DeleteDocument has no effect.
  • Self referencing fields are problematic. A Formula like @if(@thisValue="";1;@ThisValue+1) results in values 2-4-6-8 etc. You might change your formula to this: @If(@ThisValue="";1;@IsDocBeingEdited;@ThisValue+1;@ThisValue) and use LotusScript to increment your counter manually
  • @IsDocBeingEdited returns @False, so you can influence if a formula is executed in the UI or with ComputeWithForm. A standard blocker for fields would be: @if(@IsDocBeingEdited;"";@Return(@ThisValue)).
    Memento: You can't use that for computed-for-display fields, since they MUST compute when you open a form, while all other formulas don't compute in read mode (OK: defaults for new fields compute [but are not saved], as does the window title)
  • You might want to use a special form to use with ComputeWithForm. There you can do all sorts of data massage without writing lengthy LotusScript code. The code snippet would look like this:
    oldForm = doc.Form(0)
    doc.Form = "SpecialComputeForm"
    Call doc.ComputeWithForm(false,false)
    doc.Form = oldForm
    This also allows you to ban evaluate() from your script code thus keeping LotusScript and @Formula strictly separate
  • None of the form events are fired. So no LotusScript, JavaScript or @Formula in form events (QueryOpen, QuerySave etc.) executes. It is @Formulas in fields only
  • When your field returns more than 15k of data the summary flag is not set, so you can't use it in a view column. This is a bug that has been reported as SPR# TNIT5EYJ9N. I haven't found it in any release notes yet.

That pretty much sums it up.


Force Domino directory replication to all servers

When cleaning up a domain and reconfiguring servers, domains and routes, one critical success factor is the distribution of changes to all servers. In a well maintained domain the replicator task will take care of that. However, that might take hours, and if the domain you are working on is well maintained, there is little need for a clean-up in the first place.
To be independent of replication schedules and other quirks I keep an agent at hand that helps me there. You can copy that agent into your Domino Directory and take a few easy steps:
  1. Create a local replica of the Domino directory
  2. Copy the agent into it
  3. Make sure you have a Notes client that can reach ALL servers
  4. Make sure you have enough else to do (or run this agent from a VM)
The code is pretty simple.
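A minimal sketch of such an agent could look like this (assumptions: it runs in the local replica of the Domino directory; the hidden view "($Servers)" and the ServerName item are the standard directory design, but verify them for your version):

```lotusscript
Sub Initialize
	'Sketch: replicate the local directory replica with every server in the domain
	Dim s As New NotesSession
	Dim db As NotesDatabase
	Dim view As NotesView
	Dim doc As NotesDocument
	Set db = s.CurrentDatabase 'the local replica of names.nsf
	Set view = db.GetView("($Servers)")
	Set doc = view.GetFirstDocument
	While Not doc Is Nothing
		Print "Replicating with " & doc.ServerName(0)
		Call db.Replicate(doc.ServerName(0)) 'pushes and pulls changes
		Set doc = view.GetNextDocument(doc)
	Wend
End Sub
```

Since NotesDatabase.Replicate both pushes and pulls, one pass over the server documents distributes your changes everywhere your client can reach.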


What are you hiding from me?


By now you should be able to extract your database design as DXL. This opens up a series of possibilities to run reports against your database to gain more insight into its design. Since Notes has been around for so many years, you will face the situation of inheriting a database you haven't written. To understand its logic you need to see how hide-when formulas have been used across the overall design. The following XSLT allows you to do this extraction.

Use it as a starting point and play with the layout. Ideas you could look at: pack the whole report into a single table and load it into your favorite spreadsheet for sorting (a fancy Ajax lib could do that too).

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:d="http://www.lotus.com/dxl">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>

<xsl:template match="/">
  <html>
    <head>
      <title/>
      <style type="text/css"/>
    </head>
    <body>
      <table>
        <xsl:apply-templates select="//d:form"/>
        <xsl:apply-templates select="//d:subform"/>
      </table>
    </body>
  </html>
</xsl:template>

<xsl:template match="d:form">
  <tr>
    <th colspan="3"><h1>Form <xsl:value-of select="@name"/></h1></th>
  </tr>
  <tr><th/><th/><th/></tr>
  <xsl:apply-templates select="//d:pardef[d:code/@event='hidewhen']"/>
</xsl:template>

<xsl:template match="d:subform">
  <tr>
    <th colspan="3"><h1>Subform <xsl:value-of select="@name"/></h1></th>
  </tr>
  <tr><th/><th/><th/></tr>
  <xsl:apply-templates select="//d:pardef[d:code/@event='hidewhen']"/>
</xsl:template>

<xsl:template match="d:pardef">
  <tr>
    <td bgcolor="#EEEEFF"><xsl:value-of select="@id"/></td>
    <td bgcolor="#EEFFEE"><xsl:value-of select="d:code/@enabled"/></td>
    <td bgcolor="#FFEEEE"><xsl:value-of select="d:code[@event='hidewhen']/d:formula"/></td>
  </tr>
</xsl:template>

</xsl:stylesheet>


Getting hold of your replication formulas (SnTT Sunday edition)


Since R6 we have been able to manipulate replication formulas using LotusScript. You simply retrieve the entry and update or delete it. There is only one problem: if you don't know what custom replication pairs are in your database, you can't do anything about them. Since "wild" custom replication formulas are a nightmare to track down, I created the following script and XSLT stylesheet to extract this information using DXL. My script simply creates documents that document the settings. It would be easy to extend the script to include deletion of the entries. Or you could modify it to track replication formulas in multiple databases.

As usual: your mileage might vary!

Option Public
Option Declare

Sub Initialize
	Dim db As NotesDatabase
	Dim notecol As NotesNoteCollection
	Dim exporter As NotesDXLExporter
	Dim importer As NotesDXLImporter
	Dim transformer As NotesXSLTransformer
	Dim result As String
	Dim stream As NotesStream
	Dim s As New NotesSession
	'Setup environment
	Set db = s.CurrentDatabase
	'Create the collection
	Set notecol = db.CreateNoteCollection(False)
	Call notecol.SelectAllAdminNotes(True) 'ACL, Replication Formulas
	Call notecol.BuildCollection
	Print "Build the collection ..."
	'Get the transformation stylesheet
	Set stream = getRichTextSource("DocumentCustomReplicationFormulas")
	'Create the exporter/transformer/importer
	Set exporter = s.CreateDXLExporter
	Set transformer = s.CreateXSLTransformer
	Set importer = s.CreateDXLImporter
	'Now we need to wire the exporter to the transformer to the importer
	Call exporter.SetInput(notecol)
	Call exporter.SetOutput(transformer)
	Call transformer.SetStylesheet(stream)
	Call transformer.SetOutput(importer)
	Call importer.SetOutput(db)
	'Set the parameters for processing
	exporter.ExitOnFirstFatalError = False
	importer.ReplaceDBProperties = False
	importer.ReplicaRequiredForReplaceOrUpdate = False
	importer.ACLImportOption = DXLIMPORTOPTION_IGNORE
	importer.DesignImportOption = DXLIMPORTOPTION_IGNORE
	importer.DocumentImportOption = DXLIMPORTOPTION_CREATE
	Print "Processing"
	On Error Resume Next
	Call exporter.Process
	Print "Backup completed"
	result = exporter.Log
	Call stream.Close
	Beep
	Msgbox result, 0, Cstr(importer.ImportedNoteCount) & " Documents created"
End Sub

Function getRichTextSource(SourceName As String) As NotesStream
	'Retrieves a RichTextItem from the XSLTDefinitions view
	Dim db As NotesDatabase
	Dim doc As NotesDocument
	Dim v As NotesView
	Dim tmpStream As NotesStream
	Dim rt As NotesRichTextItem
	Dim s As New NotesSession
	Set db = s.CurrentDatabase
	Set v = db.GetView("XSLTDefinitions")
	Set doc = v.GetDocumentByKey(SourceName, True)
	If doc Is Nothing Then
		Msgbox "XSLT Definition " & SourceName & " not found"
		Exit Function
	End If
	Set rt = doc.GetFirstItem("Body")
	Set tmpStream = s.CreateStream
	Call tmpStream.WriteText(rt.GetUnformattedText)
	tmpStream.Position = 0
	Set getRichTextSource = tmpStream
End Function
This LotusScript was converted to HTML using the ls2html routine,
provided by Julian Robichaux at


Hierarchical documents - a Domino development pattern

Notes provides us with a built-in hierarchy for documents. You have documents, responses and responses to responses. This looks sufficient to cover most hierarchical requirements. Nevertheless it often is not the right fit. For example: would you consider the head of a department a main document and her staff response documents? Rather not. Especially moving a response document around (when you switch departments) is a pain in the neck. Enter the hierarchical document pattern. I will describe the principle; you can adapt it to your specific needs.

Step 1: Compute your document ID. Use a field DocID and the formula @Text(@DocumentUniqueId). The field can be computed-when-composed or computed. If you opt for computed-when-composed you need to take care of copy & paste operations (take care does not mean suppress it; life is hard enough for your users).

Step 2: Create a field Parent[xxx]ID. Replace the [xxx] with the logical relationship, e.g. ParentManagerID, ParentRequestID, ParentDocID. You can have as many as you want (thus a difference to the mono-inheritance of the built-in hierarchy). You need to populate these fields according to your business logic. If you only have ParentDocID you could put it above the DocID, switch on "Inherit values" in the form and use DocID as the formula for a computed-when-composed field. If you have multiple fields your mileage might vary.

Step 3: Since it is often necessary to get values up the hierarchy, we create additional fields. E.g. you have an appraisal form (HR stuff), where you want your manager, your dotted-line manager, their bosses and their bosses' bosses to be able to read how miserable you felt (er: how much you appreciated the ever changing challenges). There you have two options: pull the values on the fly when saving the document using some lengthy script code - or store the values directly in the document. You need to consider: does my hierarchy change often (every minute) or rather rarely (twice a month)? Our approach fits well for the latter.
For every Parent[xxx]ID field you create one field Parent[xxx]IDtree. This field gets the following formula (adjust as needed): @If(ParentDocID="";DocID;@GetDocField(ParentDocID;"ParentDocIDtree"):DocID). Some error handling needs to be added, and the field needs to be multi-value, of course. Now you have all DocIDs of the tree above the document in one field. This comes in VERY handy in a view, when you want to get all documents that "hang below" a specific one regardless of the depth of the hierarchy. It also allows (but we might avoid it) looping through the various documents using @GetDocField to pull values.

Step 4: Extend the pattern. Besides storing the IDs you can also store normal fields like Manager or Subject in a redundant hierarchical fashion (data normalizers will shoot me for that "high-performance" approach). Typical examples: show me all my managers, show me the project structure etc. Instead of pulling the values at view time using an @For loop and @GetDocField in a computed-for-display field, we use computed fields:
e.g. SubjectDocTree, FullNameManagerTree, CompletionDateProjectTree etc. The formula for the computed field would be similar to the one above: @If(ParentDocID="";Subject;@GetDocField(ParentDocID;"SubjectDocTree"):Subject). Repeat this for all fields you need.

Why are we doing it this way? Very simple: @Formulas typically perform faster than LotusScript code. Also the pattern is easily extensible: need another hierarchy, just add the fields. The only caveat: when you change a value "up in the tree" you need to cascade it down the branches to the leaves. However, after having identified the documents (step 2 and the matching view), a simple doc.ComputeWithForm will do, so your script code will be generic.

I know your next question: Where is the code? I'll post the LotusScript for the updates and a sample database on an upcoming Show-and-tell Thursday soon.
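In the meantime, a rough sketch of the cascading update (the view name "(ByParentTree)", assumed to be keyed on the Parent[xxx]IDtree values, is hypothetical):

```lotusscript
Sub CascadeDown(db As NotesDatabase, topDocID As String)
	'Sketch: recompute every document hanging below topDocID
	Dim view As NotesView
	Dim col As NotesDocumentCollection
	Dim doc As NotesDocument
	Set view = db.GetView("(ByParentTree)")
	Set col = view.GetAllDocumentsByKey(topDocID, True)
	Set doc = col.GetFirstDocument
	While Not doc Is Nothing
		Call doc.ComputeWithForm(False, False) 'recompute the ...Tree fields
		Call doc.Save(True, False)
		Set doc = col.GetNextDocument(doc)
	Wend
End Sub
```

Because the Tree fields are multi-value, the keyed lookup finds all descendants in one go, regardless of their depth in the hierarchy.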


What is in a server name?

A short, sweet and controversial tip today: how to name your servers. When Notes entered the market, the predominant network protocols were NetBEUI/NetBIOS and IPX/SPX. Besides those, AppleTalk, Banyan Vines and some inferior protocol called TCP/IP were supported. Naturally, Notes server discovery ran primarily via NetBIOS. Fast forward to today. Hands up: who has anything other than TCP/IP in production to link Notes clients to Domino servers? No one - good.
One of the biggest support issues (by number of cases), especially when you have travelling folks, is getting the clients connected to the server. We all fiddle with pushing connection documents, keeping them updated etc. At the same time we already run a proper naming service for our servers: DNS. When a client tries to connect to a server it uses the server name and shouts into the network: "Are you there?". The "you" in this case is the server's common name. In a properly configured intranet, DNS or NetBIOS will resolve "myserver" to (or whatever your IP might be). In the "wild" that won't work, so you have to resort to connection documents. Unless of course you didn't name your server myserver/servers/myorg but
We tried that in a number of cases: naming your server after its fully qualified DNS entry will prevent most "server not found" issues.
Of course your mileage might vary: if your internal DNS is not well administered or is subject to constant change, you will open just another can of worms.
So when you setup the next server(s) have a long chat with your DNS guys first.


Bring order to your group names

I'm not an admin buff, so I try to keep the time spent on administration short. Management of groups for security purposes can be a drag on your administrative time. Since Notes basically lets you do whatever you want, I find a lot of organically grown group structures. These are hard to maintain and often confusing. So over the years we refined a group structure that is easy to understand as well as processable through code. The groups consist of three semantic elements connected by a dot:

&domainname dot metagroup dot purpose

Some examples:



&domainname stands for your domain name or a sensible abbreviation of it.
metagroup can be either sys for system group (e.g. server access) or apps for applications
purpose can be composed of one or more sub-elements. In the sys metagroup it would be either an access role (.createdatabase) or a server name with an access role (x34za.denyaccess). In the apps metagroup it will always be the name of the application and the application function: .po.manager etc. Depending on your organization's size you might want to tweak the structure rules for the best fit (<advertisement>if in doubt ask TAO for consulting services</advertisement>). Keep in mind to stick to the maximum length of 64 characters.

The structure has a number of advantages:
  1. Using the & in front makes sure the groups are not mixed with user names or mail-group names in the address dialogue
  2. Having the domain (or a sensible abbreviation of it) in front eases the maintenance of ACL of databases and templates that get replicated across domains (e.g. from system integrator to customer)
  3. The structure allows for easy parsing. E.g. a server document must only contain groups that have sys in the second position. An ACL can only contain groups that carry the application name in the third position. We typically use a registration database where all databases are registered with their "application name", so an ACL scanner can find inconsistencies
  4. We typically add a group with designer rights to every database. The group is empty. When there is a need for troubleshooting a workflow process will add the developer to this group for a specified time frame. This way we avoid the two pitfalls in security: giving developers full admin rights or giving them permanent developer access to production databases
  5. Since group names for applications are well defined you can use the application registration database for some kind of self service, where the business owner of a database grants access to a user by filling in a form and a background agent is updating the Domino directory. You can have as much audit trail and workflow as your auditors need to see.
  6. ACL structures can be fine granular and well defined at the same time. E.g. you can enforce (and scan) rules like: If it is not a mail file, there must not be a person entry. Every database needs a support group with designer access (the structure would determine the exact name). etc.
Update: IBM recommends not to use #, so I replaced it above with &
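A sketch of how such parsing could look in @Formula (the variable GroupName and the sample value are made up; the group is assumed to follow the structure above):

```
REM {Sketch: split a group name like "acme.apps.po.manager" into its elements};
domainPart := @Word(GroupName; "."; 1);
metaGroup  := @Word(GroupName; "."; 2);
purpose    := @Right(GroupName; domainPart + "." + metaGroup + ".");
REM {e.g. a server document should only contain groups where metaGroup = "sys"};
metaGroup = "sys"
```

An agent looping over server documents and ACLs can use checks like this to flag groups that violate the naming rules.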


Visualizing a Date Range

After last week's flight-level-500 contribution I'd like to dive right into some code today. The task at hand: visualize a date range in a calendar-like display with weekends and special days (e.g. holidays) marked differently. A leave application or a travel approval system would be typical examples where that comes in handy. Do not use any LotusScript or Java or JavaScript. OK, and no C/C++ either. So the tools at hand are @Formula, HTML and CSS:

Let us assume that the two fields holding the start and the end date are LeaveStart and LeaveEnd.
Create one field, type Date, named Datelist, multi-value enabled, computed, formula: @Explode(@TextToTime(@Text(@Date(LeaveStart))+"-"+@Text(@Date(LeaveEnd))))
Another field dHolidays contains all the special days as a text list.

The surrounding code for the display looks like this:
 <table width=100% border=0 cellpadding=2 cellspacing=2 bgcolor=#DDDDFF class=leavecal>
    <tr><th colspan=7>Overview days for this request</th></tr><tr>

The code in the computed for display text field [dLeaveCal] looks like this:
REM {Holidays in Red; Weekends in Green; Leave in Blue};
tmpColWeekend := "weekend";
tmpNumHolidays := @If(@Elements(dHolidays)<1;1;@Elements(dHolidays));
tmpColHoliday := @Trim(@Explode(@Repeat("holiday!";tmpNumHolidays);"!"));
tmpColLeave := "leaveday";
tmpdays := @TextToTime(datelist);
tmpWeekdays := @Text(@Weekday(tmpdays));
tmpFirstWeekday := @TextToNumber(@Subset(tmpWeekdays;1));
leadString := @If(tmpFirstWeekday=2;"";tmpFirstWeekday=1;"<td colspan=6>&nbsp;</td>";"<td colspan="+@Text(tmpFirstWeekday-2)+">&nbsp;</td>");
tmpWdNames := @Replace(tmpWeekdays;"1":"2":"3":"4":"5":"6":"7";"Sun":"Mon":"Tue":"Wed":"Thu":"Fri":"Sat");
tmpbgcolstep1 := @Replace(datelist;dHolidays;tmpColHoliday);
tmpbgcolstep2 := @Replace(tmpbgcolstep1;datelist;tmpWeekdays);
tmpbgcolstep3 :=@Replace(tmpbgcolstep2;"1":"2":"3":"4":"5":"6":"7";tmpColWeekend:tmpColLeave:tmpColLeave:tmpColLeave:tmpColLeave:tmpColLeave:tmpColWeekend);
tmplinkebreak :=@Replace(tmpWeekdays;"1":"2":"3":"4":"5":"6":"7";"</tr><tr>":"");
leadString+@Implode("<td class="+tmpbgcolstep3+">"+tmpWdNames+"<br />"+@Text(datelist)+"</td>"+tmplinkebreak;" ")+
"</tr><tr><td  colspan=3 class="+tmpColLeave+">Leave days</td><td  colspan=2 class="+@Subset(tmpColHoliday;1)+">Public Holiday (if any)</td><td colspan=2 class="+tmpColWeekend+">Weekend</td>"
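The chain of @Replace calls is just a priority scheme: a day that matches dHolidays gets the holiday class, otherwise Saturday/Sunday get the weekend class, and everything left is a leave day. A Python sketch of that classification and of the per-day cell output (function and constant names are mine, for illustration only; note Python counts Monday as weekday 0):

```python
from datetime import date, timedelta

WEEKDAY_NAMES = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def css_class(day: date, holidays: set[date]) -> str:
    """Pick the CSS class the formula derives via the chained @Replace calls:
    holidays win, then weekends, everything else is a leave day."""
    if day in holidays:
        return "holiday"
    return "weekend" if day.weekday() >= 5 else "leaveday"

def cells(start: date, end: date, holidays: set[date]) -> str:
    """Render one <td> per day, the same output @Implode assembles."""
    out = []
    day = start
    while day <= end:
        out.append('<td class=%s>%s<br />%s</td>'
                   % (css_class(day, holidays),
                      WEEKDAY_NAMES[day.weekday()],
                      day.isoformat()))
        day += timedelta(days=1)
    return "".join(out)
```

The formula additionally emits the leading padding cell and a row break before each Sunday (@Weekday starts the week there), which I left out of the sketch for brevity.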

You need to figure out some CSS for the classes:
.holiday { font-size : xx-small;
           background-color : #FFCCCC;
           border : 1px solid #FF0000; }
.leaveday { font-size : xx-small;
            background-color : #CCCCFF;
            border : 1px solid #0000FF; }
.weekend { font-size : xx-small;
           background-color : #CCFFCC;
           border : 1px solid #00FF00; }

The result will look like this (look Ma, no JS):
Display of a date range in HTML


What are the strongest aspects of Lotus Domino?

QuickImage Category
I was asked recently to sum up the strongest points about Lotus Domino. This is what I came up with:

Lotus Domino is an integrated communication platform that features email, instant messaging, collaboration and an application development platform. Its strongest points are self-containment, robustness, scalability, freedom of platform choice, security, extensibility, low cost of ownership and IBM's commitment.

Self-containment: Domino does not depend on external applications other than the operating system and the network. While MS Exchange depends on Active Directory and Novell GroupWise on Novell's eDirectory, Domino provides its own directory, which can also serve as an LDAP directory for other applications. Thanks to this lack of external dependencies, Domino servers can be moved from one platform to another with little effort. More importantly, running Domino does not require a bundle of different products and product skills. To cover all of Domino's functionality with Microsoft's offerings, you would need multiple products: MS Active Directory, MS Exchange, MS SharePoint, MS Live Communications Server and MS SQL Server.

Robustness: Domino uses an individual database for every mail user. These databases can be synchronized ("replicated" in Notes lingo) between multiple servers and clients, which makes Domino very resilient against the failure of a single server or database. Combined with active clustering and the choice of platforms, a Domino system can be built with practically no perceivable downtime.

Scalability and platform choice: Domino is available on multiple platforms, from Windows and Linux to Unix and z/OS. It provides cross-platform clustering and load balancing, so an investment into a Domino cluster for high availability is paid back with better response times. IBM has published benchmarks in which a single iSeries server supports 100,000 concurrent mail users. Perhaps the most interesting aspect of the multi-platform availability is the possibility of running Domino servers on a different operating system than the clients, providing a natural infection barrier for viruses and malware. The platform choice doesn't stop with the servers: email and data stored in Domino can be accessed by a wealth of clients and protocols, giving customers the freedom to pick the desktop (email) application of their choice: the Lotus Notes client, IBM Workplace, MS Outlook, POP3/IMAP4 clients like Mozilla Thunderbird, or web mail via a browser.

Security: Domino's granular access control to email, calendar and Domino applications is unique in the industry. Furthermore, Domino provides digital signatures and encryption, both in Domino's proprietary format and in the industry standard X.509. Domino can serve as the corporate PKI, eliminating the need to invest in and introduce another technology for that. But there is more: by design, Domino enables the lock-down of sensitive information so that even the system administrators cannot retrieve it, assuring data owners of confidentiality.

Extensibility: Domino's mail design is completely open and can be customized to specific corporate needs. The OpenNTF open source community provides a free alternative to IBM's own design with enhanced functions, putting pressure on IBM to constantly innovate. Besides email, Domino is a platform for collaborative applications like discussion boards, blogs, wikis, CMS and CRM, with a huge selection of open source and commercial applications. The latest version of Domino makes all this available as web services, in effect turning Domino into a pillar of an SOA strategy. Since the applications share the platform with messaging, they integrate well with email and allow the creation of a corporate unified information backbone.

Low cost of ownership and IBM's commitment: a well-configured Domino platform takes advantage of policy-based administration, which minimizes administrative effort through a set of comprehensive rules. In the latest release IBM also added a comprehensive domain monitoring capability. All this frees up valuable time for administrators to manage their environment more proactively. IBM is the vendor with the most detailed roadmap for its messaging platform: the current version, Domino 7, was released only a few months ago, yet IBM has already outlined plans for Domino versions 8 and 9. This assures users of the Domino platform that they are not in for a rip-and-replace upgrade any time soon.


WebSphere, Domino and Java insights

QuickImage Category

When you are a LotusScript buff taking your first baby steps in Java on Domino, a few things are quite different. Working on DominoWebDAV I learned a few lessons I'd like to share:  
  • Get yourself a copy of this. It is still the best transition from your sound knowledge of Lotus Script Domino Objects to Java Domino Objects  
  • Domino comes with an entitlement for a WebSphere server, so this can be a good starting point. One caveat: WebSphere likes to have EAR files for application deployment. When you start off with a simple servlet or JSP, it is most likely that you just have a WAR file. When deploying a WAR file, WebSphere wants to know the application context (Tomcat, on the other hand, simply uses the WAR file name as the context: easier, but less flexible). When you enter "myapp" it won't work; you have to enter "/myapp/"  
  • When linking WebSphere to Domino for authentication (a topic for another Thursday), WebSphere by default activates J2EE security. That is a good thing for applications deployed in production, but a nightmare for a Java novice. So when you tick the security checkbox, untick the J2EE security checkbox below it (in your development environment only, of course)  
  • Before you get started with servlets, you might want to test your Java skills on an agent first. While the built-in IDE is OK, editing Java in Eclipse is much more fun. To edit Java agents there, you can use the Domiclipse plugin.  
  • Not sure about Java itself? There is an excellent Java learning IDE called BlueJ. And there are the outstanding Head First books about Java, Servlets, EJB and Patterns.  
  • There are also plenty of places where you can get Java help and insights. My favourite places are Javaranch, Java Coffee Break and Jakarta.  
  • When looking at a specific problem to solve, check the Jakarta Commons before you start coding. Many problems have already been solved for you there. For example, there is a complete HTTP client that handles authentication, cookies, HTML parsing and encryption for you.  
  • When you are done with a Domino object in Java, call its recycle() method. Otherwise you create an army of zombie C objects that eat all your memory. Of course you need to be careful about the sequence: if you recycle a database object, you can't access its documents any more, even if you still have a Java object pointing to one  
  • Have a look at session.resolve(url). This session method allows you to take a Notes URL (notes://....) and resolve it directly to a database, view or document. That's a very nice shortcut through the object hierarchy.


This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2017 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.