
About Me

I am the "IBM Collaboration & Productivity Advisor" for IBM Asia Pacific. I'm based in Singapore.
Reach out to me via Twitter: notessensei


Agile Outsourcing


The problem

Outsourcing is a "special" animal. Typically the idea is to save cost by letting a service provider execute the work. The saving happens because the service provider is supposed to be able to perform these activities at scale. Increasingly, outsourcing deals are motivated by a skill squeeze: instead of maintaining in-house expertise, companies rely on vendors to keep the lights on.
This is where the trouble starts. Negotiations on outsourcing contracts revolve around price (drive it down) and the SLA (add as many 9s after the decimal point as possible). The single outcome of such contracts is extreme risk aversion. For illustration, here is the impact of SLA levels:
SLA        Total annual downtime
98%        7 days, 7h, 12min
99%        3 days, 15h, 36min
99.9%      8h, 45min, 36sec
99.99%     52min, 34sec
99.999%    5min, 16sec
99.9999%   32sec
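Those figures are simple arithmetic; a few lines of JavaScript (a throwaway sketch, not part of any contract tooling) reproduce them:

```javascript
// Convert an SLA percentage into the allowed annual downtime.
function annualDowntime(slaPercent) {
  const totalSeconds = 365 * 24 * 60 * 60; // non-leap year
  let rest = Math.round(totalSeconds * (1 - slaPercent / 100));
  const days = Math.floor(rest / 86400); rest %= 86400;
  const hours = Math.floor(rest / 3600); rest %= 3600;
  const minutes = Math.floor(rest / 60);
  const seconds = rest % 60;
  return { days, hours, minutes, seconds };
}

console.log(annualDowntime(99.9)); // { days: 0, hours: 8, minutes: 45, seconds: 36 }
```

Run it for your favourite number of nines before you sign anything.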
The fixation on SLAs has a clinical term: OCD. Any change is treated as if someone were holding a knife to your throat and asking you to dance.
Looking at some of the figures (which I can't share) I would claim that, short of highly parallel (and expensive) transaction systems, anything above 99.9% is wishful thinking. That doesn't deter negotiators from aiming for a "look how many 9s I got" trophy. (While the Buddha reminds us: one cause of suffering is closing your eyes to reality.) Expensive SLA violation clauses let outsourcers freeze all systems, since any change (read: patches, upgrades, enhancements) is rightly identified as a grave risk (to the profits).
So all sorts of processes and checks get implemented to vet any change request and, in practice, avoid them.
This usually leads to a lot of bureaucracy and glacial progress. As a result, discontent grows, especially around non-transactional systems: outdated eMail clients, lack of mobile support etc.
The relation between outsourcer and outsourcee inevitably grows challenging over time. Does it have to be that way?

Some fresh thinking

"Just move to the cloud" might not be the answer (or everybody would be there already, it's such a nice place). So what could be done? Here are some thoughts:
  • Kiss the wholesale SLA agreement goodbye. Classify systems based on business impact. A booking system for an airline surely deserves three nines (I doubt that four would make sense), while a website can live with one nine (as long as the downtime is distributed over the year)
  • Take a page from the PaaS offerings: each element of the environment has a measurement and a price. So the outsourcing provider can offer à la carte services instead of freezing the environment. A catalogue entry could be "Running a current and patched DB/2", another could be "Run a legacy IIS, version xx"
  • Customer and provider would agree on an annual catalogue value, based on the starting environment and any known plan at the time
  • The catalogue would allow decommissioning unneeded systems and replacing them with successors without much hassle (out with PHP, in with node.js)
  • Automate, Automate, Automate - An outsourcer without DevOps (Puppet, Chef and tight monitoring) didn't get the 2017 message
  • Transparency: Running systems over processes, Customer satisfaction over unrealistic SLA, Automation over documentation (I hear the howling), Repeatable procedures over locked down environments
What do you think?


Lessons from Project OrangeBox


Project OrangeBox, the Solr-free search component, was launched after the experiments with Java8, Vert.x and RxJava in TPTSNBN concluded. Bound by a firm promise, we were working on a tight deadline and burned way more midnight oil than I would have wished for.

Anyway, I had the opportunity to work with great engineers and we shipped as promised. There are quite a few lessons to be learned; here we go (in no specific order):

  • Co-locate
    The Verse team is spread over the globe: USA, Ireland, Belarus, China, Singapore and The Philippines. While this allows for 24x7 development, it also imposes a substantial communications overhead. We made the largest jumps in both features and quality during and after co-location periods. So any sizable project needs to start with, and be punctuated by, co-location time. Team velocity will greatly benefit
  • No holy cows
    For VoP we slaughtered the "Verse is Solr" cow. That saved the Domino installed base a lot of investments in time and resources. Each project has its "holy cows": Interfaces, tool sets, "invaluable, immutable code", development pattern, processes. You have to be ready to challenge them by keeping a razor sharp focus on customer success. Watch out for Prima donnas (see next item)
  • No Prima Donnas
    As software engineers we are very prone to perceiving our view of the world as the (only) correct one. After all, we create some of it. In a team setting that's deadly. Self-reflection and empathy are as critical to success as technical skills and perseverance.
    Robert Sutton, one of my favourite Stanford authors, expresses that a little more boldly.
    In short: a team can only be bigger than the sum of its members when the individuals see themselves as members and are not hovering above it
  • Unit tests are overrated
    I hear howling; read on. Like "A journey of a thousand miles begins with a single step" you can say: "Great software starts with a unit test". Starts — not: "Great software consists of unit tests". A great journey that consists only of steps ends tragically in death by starvation, thirst or evil events.
    The same applies to your test regime: you start with unit tests, write code, pass it on to the next level of tests (module, integration, UI) etc. So unit tests are a "conditio sine qua non" of your test regime, but in no way sufficient
  • Test pyramid and good test data
    Starting with unit tests (we used JUnit and EasyMock), you move up to module tests. There, still written in JUnit, you check the correctness of higher combinations. Then you have API test for your REST API. Here we used Postman and its node.js integration Newman.
    Finally you need to test end-to-end, including the UI. For that, Selenium rules supreme. Why not e.g. PhantomJS? Selenium drives real browsers, so you can test (and automate tests) against all rendering engines, which, as a matter of fact, behave unsurprisingly differently.
    One super critical insight: You need a good set of diverse test data, both expected and unexpected inputs in conjunction with the expected outputs. A good set of fringe data makes sure you catch challenges and border conditions early.
    Last but not least: have performance tests from the very beginning. We used both Rational Performance Tester (RPT) and Apache JMeter. RPT gives you a head start in creating tests, while JMeter's XML file based test cases were easier to share and manipulate. When you are short of test infrastructure (quite often the client running the tests is the limiting factor) you can offload JMeter tests to Blazemeter.
  • Measure, measure, measure
    You need to know where your code is spending its time. We employed a number of tools to get good metrics. You want to look at averages, min, max and standard deviations of your calls. David even wrote a specific plugin to see the native calls (note open, design note open) our Java code would produce (this will result in future Java API improvements). The two main tools (besides watching the network tab in the browser) were New Relic, with deep instrumentation into our Domino server's JVM, and JAMon, collecting live statistics (which you can query on the Domino console using show stats vop). Making measurement a default practice during code development makes your life much easier later on
  • No Work without ticket
    That might be the hardest part to implement. Any code item needs to be related to a ticket. For the search component we used Github Enterprise, pimped up with Zenhub.
    A very typical flow is: someone (analyst, scrum master, offering manager, project architect, etc.) "owns" the ticket system and tickets flow down. Sounds awfully like waterfall (and it is). Breaking free from this and turning to "the tickets are created by the developers and are the actual standup" greatly improves team velocity. This doesn't preclude the creation of tickets by others, to fill a backlog or create and extend user stories. Look for the middle ground.
    We managed to get Github tickets to work with Eclipse, which made it easy to create tickets on the fly. Once you are there you can visualize progress using burn charts
  • Agile
    "Standup meeting every morning 9:30, no exception" - isn't agile. That's process strangling velocity. Spend some time to rediscover the heart of Agile and implement that.
    Typical traps to avoid:
    • use ticket (closings) as the (sole) metric. It only discourages the use of the ticket system as ongoing documentation
    • insist on process over collaboration. A "standup meeting" could be just a Slack channel for most of the time. No need to spend time every day in a call or meeting, especially when the team is large
    • Code is final - it's not. Refactoring is part of the package - including refactoring the various tests
    • Isolate teams. If there isn't a lively exchange of progress, you end up with silo code. Requires mutual team respect
    • Track "percent complete". This lives on the fallacy of 100% being a fixed value. Track units of work left to do (and expect that to eventually rise during the project)
    • One way flow. If the people actually writing code can't participate in shaping user stories or create tickets, you have waterfall in disguise
    • Narrow user definitions and stories: I always cringe at the Scrum template for user stories: "As a ... I want ... because/in order to ...". There are two fallacies: first it presumes a linear, single actor flow, secondly it only describes what happens if it works. While it's a good start, adopting more complete use cases (the big brother of user stories) helps to keep the stories consistent. Go learn about Writing Effective Use Cases. The agile twist: A use case doesn't have to be complete to get started. Adjust and complete it as it evolves. Another little trap: The "users" in the user stories need to include: infrastructure managers, db admins, code maintainer, software testers etc. Basically anybody touching the app, not just final (business) users
    • No code reviews: looking at each other's code increases coherence in code style and accelerates bug squashing. Don't fall for the trap "productivity drops by 50% if 2 people stare at one screen" - just the opposite happens
  • Big screens
    While co-located we squatted in booked conference rooms with whiteboards, post-it walls and projectors. Some of the most efficient working hours were two or three pairs of eyes walking through code, both in source and debug mode. During quiet time (developers need plenty of that; the Bose solution isn't enough), 27 or more inches of screen real estate boost productivity. At my home office I run a dual screen setup with more than one machine running (however, I have to admit: some of the code was written perched in a cattle-class seat travelling between Singapore and the US)
  • Automate
    We used both Jenkins and Travis as our automation platform. The project used Maven to keep everything together. While Maven is a harsh mistress, spending the time to provide all automation targets proved invaluable.
    You have to configure your test regime carefully. Unit tests should not only run in the CI environment, but on a developer's workstation - for the code (s)he touches. A full integration test for VoP, on the other hand, runs for a couple of hours. That's a task better left to the CI environment. Our Maven tasks included generating the (internal) website and the JavaDoc.
    Lesson learned: setting up a full CI environment is quite a task. Getting repeatable datasets in place (especially when you have time sensitive tests like "provide emails from the last hour") can be tricky. Lesson 2: you will need more compute than expected; plan for parallel testing
  • Ownership
    David owned performance, Michael the build process, Raj the query parser, Christopher the test automation and myself the query strategy and core classes. Ownership didn't mean being the (l)only coder, but feeling responsible and taking the lead in the specific module. With that sense of ownership at the code level, we went through a number of refactoring exercises, to the benefit of the result, that would never have happened had we followed, Code Monkey style, an analyst's or architect's blueprint.
As usual YMMV


The Cloud Awakening

It is a decade since Amazon pioneered cloud as a computing model. Buying ready-made applications (SaaS) enabled non-IT people to quickly acquire solutions that IT, starved of budget, skills and business focus, couldn't or didn't want to deliver. Products like Salesforce or Dropbox became household brands.
But the IT departments got a slice of the cloud cake too, in the form of IaaS. For most IT managers IaaS feels like the extension of their virtualization strategy, just running in a different data center. They still patch operating systems, deploy middleware and design never-to-fail platforms. They are in for an awakening.
Perched in the middle between SaaS and IaaS you find the cloud age's middleware: PaaS. PaaS is a mix that reaches from almost-virtual machines like Docker, to compute platforms like IBM Bluemix runtimes, Amazon Elastic Beanstalk and Google Compute Engine, all the way to the new nano services like AWS Lambda, Google Cloud Functions or IBM OpenWhisk. Without closer inspection a middleware professional would breathe a sigh of relief: middleware is here to stay.
Not so fast! What changed?
There's an old joke that claims IBM WebSphere architecture lets you build one cluster to run the whole planet on - and to outlast mankind. So the guiding principles are: provide a platform for everything, never go down. We spent time and time again (and budget) on this premise: middleware is always running. Not so in the brave new world of cloud. Instead of one rigid structure that runs and runs, a swarm of light compute (like WebSphere Liberty) each does one task, and one task can run on a whole swarm of compute. Instead of robust and stable, these systems are resilient, summed up in the catch phrase: fail fast, recover faster.
In a classical middleware environment the failure of a component is considered catastrophic (even if mitigated by a cluster); in a cloud environment, that's what's expected. A little bit like a bespoke restaurant that stays closed when the chef is sick vs. a burger joint, where one of the patty flippers not showing up is barely noticeable.
This requires a rethink: middleware instances become standardized, smaller, replaceable and repeatable. Gone are the days where one could spend a week installing a portal (as I had the pleasure a decade ago). The rethink goes further: applications can't be a "do-it-all" in one big fat chunk. First, they can't run on these small instances; secondly, they take too long to boot; third, they are a nightmare to maintain and extend. The solution is DevOps and microservices. Your compute hits the memory or CPU limit? No problem, all PaaS platforms provide a scale-out. It's fun to watch in test how classically developed software fails in these scenarios: suddenly the Singleton that controls record access isn't so single anymore. It has evil twins on each instance.
You're aiming at availability? The classical approach is to have multi-way clusters (which in the end don't do much if the primary member never goes down). In the PaaS area: have enough instances around. Even if an individual instance has only 90% availability (a catastrophic result in classic middleware), a swarm of runtimes at a moderate member count gets you to your triple digits after the dot. You can't guarantee that Joe will flip the burgers all the time, but you know: someone will be working on any given day.
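The back-of-the-envelope math: the swarm is only down when every instance is down at once, so the combined availability climbs exponentially with instance count. A quick sketch (assuming independent failures, which real deployments only approximate):

```javascript
// Availability of a swarm: at least one of n independent instances
// is up, each with the given single-instance availability.
function swarmAvailability(single, n) {
  return 1 - Math.pow(1 - single, n);
}

// Three 90% instances already land at roughly three nines.
console.log(swarmAvailability(0.9, 3)); // ≈ 0.999
```

Five such instances get you past four nines - from parts no classic middleware architect would accept.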
And that's the cloud awakening: the transition from solid to resilient*, from taking things for granted to working with what is there - may the howling begin.

* For the record: how many monarchs who had SOLID castles are still in charge? In a complex world, resilience is the key to survival


Designing a Web Frontend Development Workflow

On the web, 'you can do anything' extends to how you develop, too. With every possible path open, most developers, me included, lack direction - at least initially. To bring order to the mess I will document considerations and approaches to design a development workflow that makes sense. It will be opinionated, with probably changing opinions along the way.
First I will outline design goals, then the tools at hand, and finally propose a solution approach.

Design Goals

With the outcome in mind it becomes easier to discover the steps. The design goals don't have equal weight, and depending on your preferences you might add or remove some of them
  1. Designed for the professional
    The flow needs to make life easier when you know what you are doing. It doesn't need to try to hide any complexity away. It shall take the dull parts away from the developer, so (s)he can focus on functionality and code
  2. Easy to get started
    Some scaffolding shall allow the developer to have an instant framework in place that can be modified, adjusted and extended. That might be a scaffolding tool, a clonable project or a zip file
  3. Convention over configuration
    A team member or a new maintainer needs to be able to switch between different projects and 'feel at home' in all of them. This requires (directory) structures, procedural steps and conventions to be universal. A simple example: how do you start your application locally? Will it be npm start or gulp serve or grunt serve or nodemon or ?
  4. Suitable for team development
    Both the product code and the build script need to be modular. When one developer is adding a route and the needed functionality for module A, it must not conflict with code another developer writes for module B. This rules out central routing files, manual addition of css and js files
  5. Structured by function, not code type
    A lot of the examples out there put templates in one directory, controllers in another and directives into yet another. A better way is to group them by module, so related files live together in a single location (strongly influenced by a style guide)
  6. Suitable for build automation
    Strongly influenced by Bluemix Build & Deploy I grew fond of: check in code into the respective branch (using git-flow) and magically the running version appears on the dev, uat or production site. When using a Jenkins based approach that means that the build script needs to be self contained (short of having to install node/npm) and can't rely on tools in the global path
  7. React to changes
    A no-brainer: when editing a file, be it in an editor or an ide, the browser needs to reload the UI. Depending on the file (e.g. less or typescript) a compile step needs to happen. Bonus track: newly appearing files are handled too
  8. One touch extensibility
    When creating a new module or adding a new dependency there must not be a need of a "secondary" action like adding the JS or CSS definition to the index.html or manually adding a central route file to make that known
  9. Testable
    The build flow needs provisions to run unit tests, integration tests, code coverage reports, jshint, jslint, trace-analysis etc. "Oh, it does work" isn't enough. Code needs to pass tests and style conventions. The tests need to be able to run in the build automation too
  10. Extensible and maintainable
    Basing a workflow on a looooong build script turns easily into a maintenance task from hell. A collection of chainable tasks/modules/files can keep that in check
  11. Minimalistic (on the output)
    Keep the network out of the user experience. A good workflow minimizes both the number of calls and the size of the HTTP transmission. While in development all modules need to be nicely separated, in production I want as few CSS, HTML and JS files as possible. So once the UI is loaded, any calls on the network are limited to application data, not application logic or layout
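Goal #10 in code form: instead of one long build script, register small named tasks and chain them. A minimal sketch of the idea in plain node - not gulp or grunt, and the task names are made up for illustration:

```javascript
// A tiny task registry: each task is a small, named function;
// a build is just a chain of task names.
const log = [];
const tasks = {};
const task = (name, fn) => { tasks[name] = fn; };
const run = (...names) => names.forEach(n => tasks[n]());

// Each task stays short and testable on its own.
task('lint',   () => log.push('lint'));
task('test',   () => log.push('test'));
task('bundle', () => log.push('bundle'));
task('build',  () => run('lint', 'test', 'bundle'));

run('build');
console.log(log.join(' -> ')); // lint -> test -> bundle
```

Splitting each real task into its own file keeps the build maintainable; gulp and grunt apply the same pattern with streams and plugins on top.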
A very good starting point is John Papa's Yeoman generator HotTowel. It's not perfect: the layout overwrites the index.html on each new dependency/module, violating goal #4, and it depends on outdated gulp modules - goal #10 (I had some fun when I tried to swap gulp-minify-css, as recommended, for gulp-cssnano, and thereafter font-awesome wouldn't load, since the minified css had a line comment in front of the font definition). I also don't particularly care (for my use case) to have the server component in the project. I usually keep them separate and just proxy my preview to my server project.

Tools at hand

There are quite a few candidates: gulp, grunt, bower, bigrig, postman, yeoman, browserify, Webpack, Testling and many others. Some advocate that npm scripts are sufficient.
In a nutshell: there are plenty of options.
One interesting finding: Sam Saccone researched what overhead ES6 (a.k.a. ES2015) has over current ES5. TypeScript (using browserify) performed quite well. This makes it a clear candidate, especially when looking at AngularJS 2
Next up: Baby steps


Developer or Coder? - Part 1

Based on a recent article I was asked: "So how would you train a developer to be a real developer, not just a coder?". Interesting question. Regardless of language or platform (maybe short of COBOL, where you visit retirement homes a lot), each training path has large commonalities.
Below I outline a training path for a web developer. I'm quite opinionated about the tools and frameworks to use, but wide open about the tools to know. The list doesn't represent a recommended sequence; that would be the subject of an entirely different discussion:


It's just HTML, CSS and JavaScript, isn't it?

Web development is easy, isn't it? After all, it is just the open standards of HTML, CSS and JavaScript.
That's roughly the same fallacy as saying: it's easy to write a great novel, since it is just words and grammar.
The simple truth: it is hard.
I'll outline my very opinionated version of what skills you need for mastery of front-end programming. The list doesn't include the skills to figure out what to program, just how. So here it goes:


Jeffrey Veen, whom I had the pleasure to meet in Hong Kong once, summarized it nicely in his book The Art and Science of Web Design back in 2000: "HTML provides structure, CSS layout and JavaScript behavior". Like English, which needs a style guide (and the mastery of it), web development has several guides, besides the plain definitions mentioned in the beginning.


  • The standard today is HTML5. So ignore any screams "But it must run in IE6" and see what HTML5 can do. There's a handy list of compatible elements you can check for reference
  • The predominant guideline for structure is Twitter bootstrap. It defines a layout and structure language, so your page structures become understandable by many developers. Twitter bootstrap contains CSS and JavaScript too, since they are intertwined
  • If you don't like Bootstrap, there are other options, but you have been warned
  • The other HTML structure to look at is the Ionic framework. Again it has more than HTML only


Most frameworks include CSS, so when you picked one above, you are pretty much covered. However, understanding it well takes a while, so you can modify the template you started with. Still my favourite place to get a feel for the power of CSS is CSS Zen Garden. The W3C provides a collection of links to tutorials to deepen your knowledge. My strong advice to developers: don't touch it initially. Use a template. Once your application is functional, then revisit the CSS (or let a designer do that).


Probably the most controversial part. IBM uses Dojo big time, ReactJS is gaining traction and Aurelia is up and coming. So there's a lot to watch out for. But that is not where you get started.
  • Start learning JavaScript including ES6. Some like CoffeeScript too, but not necessarily for starters
  • The most popular core library is jQuery, so get to know it. The $ operator is quite convenient and powerful
  • To build MVC-style single page applications I recommend AngularJS. It has a huge body of knowledge, a nice form module and an active community. Version 2.0 will bring a huge boost in performance too. Make sure you know John Papa's style guide and follow it
  • And again: have a look at Ionic and Cordova (used in Ionic) for mobile development. It uses AngularJS under the hood
There are tons of additional libraries around for all sorts of requirements, which probably warrants another article.


With all the complexity around, you don't have to start from scratch. There are plenty of templates around that, free or for a small fee, give you a head start. Be very clear: they are an incredible resource when you know how it all works together, but they don't relieve you from learning the skill. Here are my favourites

Tools and resources

Notepad is not a development tool. There are quite a few tools you need if you want to be productive
  • You need a capable editor, luckily you have plenty of choices or you opt for one or the other IDE
  • Node.js: a JavaScript runtime based on Google's V8 engine. It provides the runtime for all the other tools. If you don't have a node.js installation on your developer workstation, where were you hiding the last 2 years?
  • Bower: a dependency manager for browser files like CSS and JS with their various frameworks. You add a dependency (e.g. Bootstrap) to the bower.json file and Bower will find the right version for you
  • Grunt: a task runner application, used for assembly of web sites/applications (or any other stuff). Configured using a Gruntfile.js: it can do all sorts of steps like: copy files, combine and minify, check code quality, run tests etc.
  • Yeoman: an application scaffolding tool. It puts all the tools and needed configurations together. It uses generators to blueprint different applications, from classic web to reveal.js presentations to mobile hybrid apps. The site lists hundreds of generators and you are invited to modify them or roll your own. I like generator-angular and mcfly
  • GIT: the version control system. Use it from the command line, your IDE or a version control client
  • Watch NPM for new development

Those are the tools to master when you just want HTML, CSS and JavaScript


Multitenancy - a blast from the past?

Wikipedia defines multitenancy as: "Software Multitenancy refers to a software architecture in which a single instance of a software runs on a server and serves multiple tenants ... Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants "
Looking at contemporary architectures like Docker, OpenStack, Bluemix, AWS or Azure, I can't see many actual payloads being multi-tenant. Applications, by and large, run single tenant. The prevalent (and IMHO only currently valid) use case is the administrative component that manages the rollout of the individual services. So instead of building that one huge system, you "only" need a management layer (and enough runtime).
Multi-Tenant architecture
The typical argument for a multitenancy deployment is rooted in the experience that platforms are expensive and hard to build. So when one platform is ready, it is better that everybody uses it. With virtual machines, containers and PaaS this argument becomes weaker and weaker. The added complexity in code outweighs the cost of spinning up another instance (which can be done in minutes).
A multi-instance architecture mitigates these efforts:
Multi-Instance architecture
The burden of managing the tenants lies with the identity and entitlement management. Quite often (see Bluemix) those are only accessed by admin and development personnel, so their load is lighter. So next time someone asks for multitenancy, send them away thinking.
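In code, the difference is small but telling: instead of every query in the payload application carrying a tenant key, the management layer simply maps a tenant to its own instance. A sketch with made-up tenant names and URLs:

```javascript
// Multi-instance: the only "multi-tenant" piece is the directory
// that maps each tenant to its dedicated instance.
const instances = {
  acme:    'https://acme.example.com',
  initech: 'https://initech.example.com'
};

function instanceFor(tenant) {
  const url = instances[tenant];
  if (!url) throw new Error('Unknown tenant: ' + tenant);
  return url; // the instance itself knows nothing about other tenants
}

console.log(instanceFor('acme')); // https://acme.example.com
```

Everything below that lookup - schema, code, data - stays blissfully single tenant.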


The Rise of JavaScript and Docker

I used JavaScript loosely in this headline to refer to a set of technologies: node.js, Meteor, Angular.js (or React.js). They share a commonality with Docker that explains their (pun intended) meteoric rise.
Lets take a step back:
JavaScript on the server isn't exactly new. The first server side JavaScript was implemented in 1998, and the union mount that made Docker possible dates from 1990. Client side JavaScript frameworks are plenty too. So what made the mentioned ones so successful?
I make the claim that it is machine readable community. This is where these tools differ. node.js is inseparable from its package manager npm. Docker is unimaginable without its registry, and Angular/React (as well as jQuery) live on cushions of myriads of plug-ins and extensions. While the registries/repositories are native to Docker and node.js, the front-ends take advantage of tools like Bower and Yeoman that make all the packages feel native.
These registries aren't read-only, which is a huge point. By providing the means of direct contribution and/or branching on GitHub, the process of contribution and consumption became two-way. The mere possibility to "give back" created a stronger sense of belonging (even if that sense might not be fully conscious).
Machine readable community is a natural evolution born out of the open source spirit. For decades developers have collaborated using chat (IRC anyone?), discussion boards, Q&A sites and code sharing places. With the emergence of GIT and GitHub as the de facto standard for code sharing, the community was ready.
The direct access from scripts and configurations to the source repository replaced the flow of "human vetting, human download, human unpack and copy to the right location" with "specify what you need and the machine will know where to get it". Even this idea wasn't new. In the Java world the Maven plug-in has provided that functionality since 2002.
The big difference now: Maven wasn't native to Java; it required a change of habit. Things are done differently with it than without. npm, on the other hand, is "how you do things in node.js". Configuring a Docker container is done using the registry (and you have to put in extra effort if you want to avoid that).
So all the new tooling uses repositories as "this is how it works" and complements human readable community with machine readable community. Of course, there is technical merit too - but that has been discussed elsewhere at great length.


A peek in my JavaScript Toolbox

Every craftsman has a toolbox, except developers: we have many. For every type of challenge we use a different box. Here's a peek into my web front-end programming collection. It works with any of your favorite backends. In no specific order:
  • AngularJS

    one of the popular data binding frameworks, created by Google engineers. With a focus on extensibility, testability and clear separation of concerns, it allows you to build clean MVC style applications
  • Data Driven Documents

    short: D3JS. If anything needs to be visualized, d3js can deliver. Go and check out the samples. There is a set of abstractions on top of it that makes things simpler. I consider d3js the gold standard of what is possible in JS visualizations
  • Mustache

    Logicless templating for any language. I use it where I can't/won't use AngularJS' templating
  • PivotTable.js

    We love to slice and dice our data. Instead of downloading data and processing it in a spreadsheet, I use this JavaScript library.
  • Angular-Gantt

    Timeline bound data loves Gantt charts. This component makes it easy to visualize them
  • TemaSYS

    A wrapper around WebRTC. It allows you to add voice and video to your application in an instant, no heavy backend required
  • PredictionIO

    PredictionIO is an open source machine learning server for software developers to create predictive features, such as personalization, recommendation and content discovery. Competes with IBM's Watson
  • Workflow

    I'm not a big fan of graphical workflow editors. You end up spending lots of time drawing stuff. I'd rather let the system do the drawings
    • Sequence Diagrams
      Visualize how information flows between the actors in a system. Great to show who did what to whom in Game of Thrones
    • JS Flowchart
      Visualize a flow with conditional branches. I contributed the ability to color code the diagram, so you can show: current path, branches not taken, current step and undecided route. (there are others)
  • Reporting

    Reports should be deeply integrated into the UI and not be standalone.
  • Card UI

    While not exactly JavaScript, designing with cards is fashionable. I like how Google's Material Design explains cards
    • Bootcards
      Twitter Bootstrap meets cardUI. Lots of quality details to generate a close to native experience
    • Swing
      Swipe left/right for Yes/No answers
  • Tools

    I haven't settled on an editor yet. Contestants are Geany, Eclipse (with plug-ins), WebStorm, Sublime and others. Other tools are clearer:
    • JSHint
      Check your JavaScript for good style
    • Bower
      JavaScript (and other front-end matters) dependency management. It is like mvn for front-ends
    • Grunt
      A JavaScript task runner. It runs a preview server, unit tests, packaging and deployment tasks. I'm watching its competitor Gulp closely
    • Yeoman
      Scaffolding tool that combines Grunt, Bower and typical libraries
    • Cloudant
      NoSQL JSON friendly database. Works for offline-first scenarios together with a JavaScript browser database
    • GenyMotion
      Fast Android emulator
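To illustrate the logicless templating idea from the Mustache entry above: mustache.js exposes Mustache.render(template, view); the hand-rolled render function below is only a sketch of the {{placeholder}} substitution concept, not the real library API.

```javascript
// Minimal {{placeholder}} substitution in the spirit of Mustache.
// This is a hand-rolled sketch, NOT the mustache.js API - the real
// library also handles sections, escaping and partials.
function render(template, view) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, function (match, key) {
    // walk dotted paths like "person.name"
    var value = key.split(".").reduce(function (obj, part) {
      return obj == null ? undefined : obj[part];
    }, view);
    return value == null ? "" : String(value);
  });
}

// Usage: the same template works regardless of where the data comes from
var greeting = render("Hello {{person.name}}, you have {{count}} tasks", {
  person: { name: "Ada" },
  count: 3
});
console.log(greeting); // Hello Ada, you have 3 tasks
```

The point of "logicless": the template carries no conditions or loops of its own, so it stays readable and portable between languages.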


Enterprise architecture - from Silos to Layers

In a recent discussion with a client the approaches and merits of Enterprise Architecture took center stage. IBM for a very long time proposed SOA (service oriented architecture), which today mostly gets implemented in a cloud stack. While it looks easy from high enough above, the devil is in the details, mostly in the details of how to get there. The client had an application landscape that was segmented along full stack development platforms with little or no interaction between the segments or silos:
Silo based Enterprise Architecture
The challenges they are facing are:
  • No consistency in user experience
  • Difficult to negotiate interfaces point-to-point
  • No development synergies
  • Growing backlog of applications
In the discussion I suggested to first have a look at adopting all the principles needed to successfully pass the Spolsky test. Secondly, transform their internal infrastructure to be cloud based, so when the need arises workloads can easily be shifted to public cloud providers. The biggest change would be to flip the silos and provide a series of layers that are used across the technologies. A very important aspect in the layer architecture is the use of Design by contract principles. The inner workings of a layer are, as much as sensible, hidden behind an API contract. So when you e.g. retrieve customer information, it could come from SAP, Notes, an RDBMS or NoSQL: you wouldn't know and you wouldn't care.
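A minimal JavaScript sketch of that contract idea (all backend names and data shapes below are invented for illustration): the consumer codes against getCustomer(id), while the backend behind the contract can be swapped freely.

```javascript
// Sketch of "design by contract" for a data access layer.
// Backend names and data shapes here are hypothetical illustrations.
// The contract: getCustomer(id) -> { id, name, source }
function makeCustomerService(backend) {
  return {
    getCustomer: function (id) {
      var raw = backend.fetchCustomer(id); // backend-specific call
      // normalize into the contract shape, hiding backend details
      return { id: id, name: raw.name, source: backend.label };
    }
  };
}

// Two interchangeable (fake) backends behind the same contract
var notesBackend = {
  label: "Notes",
  fetchCustomer: function (id) { return { name: "Customer " + id }; }
};
var sqlBackend = {
  label: "RDBMS",
  fetchCustomer: function (id) { return { name: "Customer " + id }; }
};

// The consumer doesn't know and doesn't care where the data comes from
var service = makeCustomerService(notesBackend);
console.log(service.getCustomer(42).name); // Customer 42
```

Swapping notesBackend for sqlBackend changes nothing for the caller, which is exactly the layering property described above.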


Keeping up with all the GIT

Unless you're stuck in the last century, you might have noticed that the gold standard for version control is Git. Atlassian likes it, IBM DevOps supports it and of course the Linux kernel is built with it.
The prime destination for open source projects is GitHub, with Bitbucket coming in strong too. Getting the code of a project you work with (and I bet you do, jQuery anyone?) is just a git clone away. Of course that opens the challenge of keeping up with all the changes and updates. In the projects you actively work on, a branch, pull and push is daily work, using the command line or a nice UI. For those "keep an eye on" projects this gets tedious quite fast.
I'm using a little script (you even could cron it or attach it to a successful network connection) to keep all my "read-only" repositories up-to-date. You need to change the basedir= to match your path. Enjoy
# Helper script to keep all the things I pulled from GITHUB updated
# most of them are in ~/github, but some are somewhere else
basedir=~/github
companydir=~/company

# Pulls a repository from GIT origin or Mercurial
syncrep() {
	echo "Processing $1 ..."
	cd "$1" || return
	isHG=`find . -maxdepth 1 -type d -name ".hg"`
	if [ -n "$isHG" ]; then
		echo "$1 is a Mercurial directory"
		hg pull &
	else
		git pull origin master &
	fi
	cd - > /dev/null
}

# Part 1: all in ~/github
notify-send -t 20000 -u low -i gtk-dialog-info "Starting GIT threaded update"
for f in "$basedir"/*/ ; do
	syncrep "$f"
done

# Part 2: all in ~/company
notify-send -t 20000 -u low -i gtk-dialog-info "Starting COMPANY threaded update"
for f in "$companydir"/*/ ; do
	syncrep "$f"
done

cd ~
notify-send -t 20000 -u low -i gtk-dialog-info "All GIT pull requests are on their way!"

# Wait for the result: as long as git processes are running, keep checking
while pgrep git- > /dev/null ; do
	sleep 60
done
notify-send -t 20000 -u low -i gtk-dialog-info "GIT pull updates completed"

A little caveat: when you actively work on a repository you might not want the combination origin/master, so be aware: as usual YMMV


Collaboration in context

Harry, a storm is coming, at least if you follow Cary Youman. Nothing less than the way we collaborate will, again, be a focus for IBM. The need has not found a definite solution. The attempt to reinvent eMail is starving in the incubator. Great minds try to reinvent the conversation (and it looks suspiciously like Wave). So what is so tricky about collaboration?
In short, it is context, the famous five Ws. In our hyperconnected world, context can get big rather fast:
Collaboration In Context
An eMail system usually provides limited context: From, When, Subject. Using tools and advanced analytics, modern systems try to spice up that context. Others shoot the messenger without addressing the next level of the problem: flood vs. scatter


Foundation of Software Development

When you learn cooking, there are a few basic skills that need to be in place before you can get started: cutting, measuring, stirring and understanding temperature's impact on food items. These skills are independent of what you want to cook: western, Chinese, Indian, Korean or space food.
The same applies to software development. Interestingly we try to delegate these skills to UI designers, architects, project managers, analysts or infrastructure owners. To be a good developer, you don't need to excel in all of those skills, but you should at least develop a sound understanding of the problem domain. There are a few resources that I consider absolutely essential. All of them are pretty independent of what language, mechanism or platform you actually use, so they provide value to anyone in the field of software development.
As usual YMMV


Long Term Storage and Retention

Time is relative, and not just since Einstein. For a human brain anything above 3 seconds is long term. In IT this is a little more complex.

Once a work artefact is completed, it runs through a legal vetting and it either goes to medium or long term storage. I'll explain the difference in a second. This logical flow manifests itself in multiple ways in concrete implementations: Journaling (both eMail and databases), archival, backups, write-once copies. Quite often all artifacts go to medium term storage anyway and only make it into long term storage when the legal criteria are met. Criteria can be:
  • Corporate & Trade law (e.g. the typical period in Singapore is 5 years)
  • International law
  • Criminal law
  • Contractual obligations (e.g. in the airline industry all plane related artefacts need to be kept at least until the last plane of that family has retired; the Boeing 747 family has been in service for more than 40 years)
For a successful retention strategy three challenges need to be overcome:
  1. Data Extraction

    When your production system doesn't provide retention capabilities, how do you get the data out? In Domino that's not an issue, since it provides robust storage for 25 years (you still need to back up data). However if you want a cross application solution, have a look at IBM's Content Collector family of products (of course other vendors have solutions too, but I'm not on their payroll)
  2. Findability

    Once an artifact is in the archive, how do you find it? Both navigation and search need to be provided. Here a clever use of meta data (who, what, when, where) makes the difference between a useful system and a bit graveyard. Meta data isn't an abstract concept, but the ISO 16684-1:2012 standard. And, yes, it uses the Dublin Core, not to be confused with Dublin's ale
  3. Consumability / Resilience

    Once you have found an artifact, can you open and inspect it? This very much boils down to: do you have software that can read and render this file format?
The last item (and to some extent the second) makes the difference between mid-term and long-term storage. In a mid-term storage system you presume that, short of potential version upgrades, your software landscape doesn't change and the original software is still actively available when a need for retrieval arises. Furthermore you expect your retention system to stay the same.
On the other hand, in a long-term storage scenario you can't rely on a specific software for either search or artifact rendering. So you need to plan a little more carefully. Most binary formats fall short of that challenge. Furthermore your artefacts must be able to "carry" their meta data, so a search application can rebuild an access index when needed. That is one of the reasons why airline maintenance manuals are stored in DITA rather than an office format (note: docx is not compliant to ISO/IEC 29500 Strict).
The problem domain is known as Digital Preservation and has a reference implementation and congressional attention.
In a nutshell: keep your data as XML, PDF/A or TIFF. MIME could work too; it is good with meta data after all and it is native to eMail. The MIME trap to avoid: MIME parts that are proprietary binary (e.g. your attached office document). So proceed with caution
Neither PST, OST nor NSF are long term storage suitable (you still can use the NSF as the search database)
To be fully sure, a long term storage system would retain the original format (if required) as well as a vendor independent format.
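To sketch why artefacts must "carry" their meta data (the field names below are simplified illustrations; a real system would use the Dublin Core element set): when each stored item embeds its own who/what/when fields, an access index can be rebuilt from the store alone, with no dependency on the original application.

```javascript
// Sketch: artefacts carry their own meta data (field names illustrative;
// a real system would use Dublin Core elements such as dc:creator and
// dc:date), so a search index can be rebuilt from the artefacts alone.
var store = [
  { meta: { who: "jdoe", what: "contract", when: "2009-03-01" }, body: "..." },
  { meta: { who: "asmith", what: "invoice", when: "2012-07-15" }, body: "..." }
];

// Rebuild an access index keyed on any meta field - no external
// database needed, the information travels with the artefact
function rebuildIndex(artefacts, field) {
  var index = {};
  artefacts.forEach(function (artefact, position) {
    var key = artefact.meta[field];
    (index[key] = index[key] || []).push(position);
  });
  return index;
}

var byAuthor = rebuildIndex(store, "who");
console.log(byAuthor.jdoe); // [ 0 ]
```

Because the index is derivable, losing the search application is recoverable; losing the meta data is not.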


Workflow for beginners, Standards, Concepts and Confusion

The nature of collaboration is the flow of information. So naturally I get asked about workflows and their incarnation in IT systems a lot. Many of the questions point to a fundamental confusion about what workflow is, and what it isn't. This entry will attempt to clarify concepts and terminology
Wikipedia sums it up nicely: "A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information. It can be depicted as a sequence of operations, declared as work of a person or group, an organization of staff, or one or more simple or complex mechanisms".
Notably absent from the definition are: "IT system", "software", "flowchart" or "approval". These are all aspects of the implementation of a specific workflow system, not the whole of it. The Workflow Management Coalition (WfMC) has all the specifications, but they might appear to be written in a mix of Technobabble and Legalese, so I sum them up in my own words here:
  • A workflow has a business outcome as a goal. I personally find that quite a narrow definition, unless you agree that "spring cleaning is serious business". So I would say: a workflow is a collection of steps, designed to be repeatable, that make it easier to achieve an outcome. A workflow is an action pattern, the execution of a process. It helps to save time and resources when it is well designed and can be a nightmare when misfitted
  • A workflow has an (abstract) definition and zero or more actual instances where the workflow is executed. Like: "Spring cleaning requires: vacuuming, wiping, washing" (abstract) vs. "Spring cleaning my apartment on March 21, 2014" (actual). Here lies the first challenge: can a workflow instance deviate from the definition, and by how much? How are cases handled when the definition changes in the middle of a flow execution? How to handle workflow instances that require more or fewer steps? When is a step mandatory for regulatory compliance?
  • A workflow has one or more actors. In a typical approval workflow the first actor is called requestor, followed by one or more approvers. But actors are not limited to humans and the act of approving or rejecting. A workflow actor can be a piece of software that adds information to a flow based on a set of criteria. A typical architecture for automated actors is SOA
  • Workflow systems have different magnitudes. The flagship products orchestrate flows across multiple independent systems and eventually across corporate boundaries, while I suspect that the actual bulk of (approval) flows runs in self-contained applications that might use coded flow rules, internal or external flow engines
  • On the other end of the scale sits eMail, where the flow and sequence are hidden in the heads of the participants or scribbled into freeform text
  • Workflows can be described in Use Cases, where the quality depends on the completeness of the description, especially the exception handling. A lot of Business Process Reengineering that is supposed to simplify workflows fails due to incomplete exception handling and people start to work "around the system" (eMail flood anyone?)
  • A workflow definition has a business case and describes the various steps. The number of steps can be bound by rules (e.g. "the more expensive, the more approvers are needed" or "if the good transported is HazMat approval by the environmental agency is needed") that get interpreted (yes/no) in the workflow instance
  • Determining the next actor(s) is a task that combines the workflow instance step with a role resolver. That's the least understood and most critical part of a flow definition. Let's look at a purchase approval flow definition: "The requestor submits the request for approval by the team lead, to be approved by the department head, with final confirmation by the controller". There are 4 roles to resolve. This happens in the context of the request and the organisational hierarchy. The interesting question: if a resolver returns more than one person, what to do? Pick one at random, round robin or something else?
  • A role resolver can run on submission of a flow or at each step. People change roles, delegate responsibilities or are absent, so results change. Even if a (human) workflow step already has a person assigned, a role resolver is needed. That person might have delegated a specific flow, for a period (leave) or permanently (workload distribution). So Jane Doe might have delegated all approvals below sum X to her assistant John Doe (not related), but that doesn't get reflected in the flow definition, only in the role resolution
  • Most workflow systems gloss over the importance of the role resolver. Often the role resolver is represented by a rule engine, which gets confused with the flow engine. Both parts need to work in concert. We also find role resolution coded as a tree crawler along an organisational tree. Role resolving warrants a complete post of its own (stay tuned)
  • When Workflow is mentioned to NMBU (normal mortal business users), then two impressions pop up instantly: Approvals (perceived as the bulk of flows) and graphical editors. This is roughly as accurate as "It is only Chinese food when it is made with rice". Of course there are ample examples of graphical editors and visualizations. The challenge: the shiny diagrams distract from role definitions, invite overly complex designs and contribute less to a successful implementation than sound business cases and complete exception awareness
  • A surprisingly novel term inside a flow is SLA. There's a natural resistance to the idea that a superior (approver) might be bound by an action of a subordinate to act within a certain time frame. Quite often, making SLAs part of a workflow initiative provides an incentive to look very carefully at making processes complete and efficient
  • Good process definitions are notoriously hard to write and document. A lot of implementations suffer from a lack of clear definitions. Even when the what is clear, the why gets lost. Social tools like a wiki can help a lot
  • A good workflow system has a meta flow: a process to define a process. That's the part where you usually get blank stares
  • Read one or the other good book to learn more
There is more to say about workflow, so stay tuned!


Adventures with vert.x, 64Bit and the IBM Notes client

The rising star of web servers currently is node.js, not least due to the Cambrian explosion of available packages, a clever package management system and the fact that "any application that can be written in JavaScript, will eventually be written in JavaScript" (according to Jeff Atwood).
When talking to IBM Domino or IBM Connections, node.js allows for very elegant solutions using the REST APIs. However, when talking to an IBM Notes client, it can't do much, since an external program needs to be Java or COM, the latter on Windows only.
I really like node.js' event driven programming model, so I looked around. As a result I found vert.x, which does to the JVM what node.js does to Google's V8 JS runtime. Wikipedia describes vert.x as "a polyglot event-driven application framework that runs on the Java Virtual Machine". Vert.x is now an Eclipse project.
While node.js is tied to JavaScript, vert.x is polyglot and supports Java, JavaScript, CoffeeScript, Ruby, Python and Groovy with Scala and others under consideration.
Task one I tried to complete: run a verticle that simply displays the current local Notes user name. Of course exploring new stuff comes with its own set of surprises. At the time of writing the stable version of vert.x is 2.1.1, with version 3.0 under heavy development.
Following the discussions, version 3.0 will introduce quite a few changes in the API, so I decided to be brave and use the 3.0 development branch to explore.
The fun part: there is not much documentation for 3.x yet, while version 2.x is well covered in various books and the online documentation.
vert.x 3.x is on the bleeding edge and uses lambda expressions, so just using Notes' Java 6 runtime was not an option. The Java 8 JRE had to be installed. Luckily that is rather easy.
The class is rather simple, even after including Notes.jar, getting it to run (more below) not so much:
package com.notessensei.vertx.notes;

import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerResponse;


import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class Demo {
	public static void main(String[] args) throws IOException {
		new Demo();
		int quit = 0;
		while (quit != 113) { // 113 = ASCII code of "q"
			System.out.println("Press q<Enter> to stop the verticle");
			quit =;
		}
		System.out.println("Verticle terminated");
		System.exit(0); // terminate the vert.x event loop threads
	}

	private static final int listenport = 8111;

	public Demo() {
		Vertx vertx = Vertx.factory.createVertx();
		HttpServerOptions options = new HttpServerOptions().setPort(listenport);
				.requestHandler(new Handler<HttpServerRequest>() {
					public void handle(HttpServerRequest req) {
						HttpServerResponse resp = req.response();
						resp.headers().set("Content-Type",
								"text/plain; charset=UTF-8");
						StringBuilder b = new StringBuilder();
						try {
							// Notes calls need a Notes-initialized thread
							NotesThread.sinitThread();
							Session s = NotesFactory.createSession();
							b.append(s.getUserName());
						} catch (Exception e) {
							b.append(e.getMessage());
						} finally {
							NotesThread.stermThread();
						}
						resp.end(b.toString());
					}
				}).listen();
	}
}
Starting the verticle looked promising, but once I pointed my browser to http://localhost:8111/ the fun began.


From XML to JSON and back

In the beginning there was CSV and the world of application neutral (almost) human readable data formats was good. Then unrest grew and the demand for more structure and contextual information grew. This gave birth to SGML (1986), adopted only by a few initiates.
Only more than a decade later (1998) SGML's offspring XML took centre stage. With broad support for schemas, transformation and tooling the de facto standard for application neutral (almost) human readable data formats was established - and the world was good.
But bracket phobia and the heavy processing toll, especially on browsers, led to the rise of JSON (starting ca. 2002), which rapidly became the standard for browser-server communication. It is native to JavaScript, very light compared to XML and fast to process. However, at the time of writing, it lacks support for an approved schema and transformation language (like XSLT).
This leads to the common scenario that you need both: XML for server to server communication and JSON for in-process and browser to server communication. While they look very similar, XML and JSON are different enough to make transition difficult. XML knows elements and attributes, while JSON only knows key/value pairs. A JSON snippet like this:
{ "name" : "Peter Pan",
  "passport" : "none",
  "girlfriend" : "Tinkerbell",
  "followers" : [{"name" : "Frank"},{"name" : "Paul"}]

can be expressed in XML in various ways:
<?xml version="1.0" encoding="UTF-8"?>
<person name="Peter Pan" passport="none">
    <girlfriend name="Tinkerbell" />
    <followers>
        <person name="Frank" />
        <person name="Paul" />
    </followers>
</person>

<?xml version="1.0" encoding="UTF-8"?>
<person>
    <name>Peter Pan</name>
    <passport>none</passport>
    <girlfriend>
        <name>Tinkerbell</name>
    </girlfriend>
    <followers>
        <person><name>Frank</name></person>
        <person><name>Paul</name></person>
    </followers>
</person>

<?xml version="1.0" encoding="UTF-8"?>
<person name="Peter Pan" passport="none">
    <person name="Tinkerbell" role="girlfriend" />
    <person name="Frank" role="follower" />
    <person name="Paul" role="follower" />
</person>

(and many others). The JSON object doesn't need a "root name"; the XML does. The other way around is easier: each attribute simply becomes a key/value pair. Some XML purists see attributes as evil; I think they do have their place to make relations clearer (is-a vs. has-a) and XML less verbose. So transforming back and forth between XML and JSON needs a "neutral" format. In my XML session at Entwicklercamp 2014 I demoed how to use a Java class as this neutral format. With the help of the right libraries, that's flexible and efficient.
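The session used a Java class as the neutral format; the same idea can be sketched in JavaScript, where a plain object acts as the neutral representation with one writer per output format (the XML writer below is a deliberately simplistic illustration, not a production serializer: no escaping, attributes only).

```javascript
// Sketch: a plain object as "neutral format" between JSON and XML.
// The XML writer is deliberately simplistic - a real implementation
// would escape values and use a proper serializer library.
var neutral = {
  name: "Peter Pan",
  passport: "none",
  girlfriend: "Tinkerbell",
  followers: ["Frank", "Paul"]
};

function toJson(person) {
  return JSON.stringify(person);
}

function toXml(person) {
  var xml = '<person name="' + + '" passport="' + person.passport + '">';
  xml += '<girlfriend name="' + person.girlfriend + '" />';
  person.followers.forEach(function (follower) {
    xml += '<person name="' + follower + '" role="follower" />';
  });
  return xml + "</person>";
}

console.log(toJson(neutral));
console.log(toXml(neutral));
```

Both writers read the same neutral object, so adding a third output format doesn't touch the other two.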


The folly of root cause analysis

IT support's dealings with management are a funny business. Whenever something goes wrong, support teams engage in "defensive blaming" and the elusive quest for a root cause.
I've seen this quest (and the blaming of god and country along the way if it doesn't appear) take priority over problem resolution and prevention. The twisted thought is: "If I'm not sure about the (single) root cause, I can neither fix it nor prevent it from happening again".

Why is that a folly?
  • It paralyses: if a person bleeding enters an ER, the first call to action is to stop the bleeding. Asking: do all the suppliers of the manufacturer of the blade that caused the wound have ISO 9000 certifications? Where was the ore mined to make the blade? Root cause analysis is like that. IT support, however, is an ER room
  • It is not real: There is never a single reason. Example: Two guys walk along the street in opposite directions. One is distracted, because he is freshly in love. The other is distracted because he just was dumped. They run into each other and bump their heads. What is the root cause for it?
    You remove a single factor: route chosen, fallen in love, being dumped, time of leaving the house, speed of walking, lack of discipline to put attention ahead of emotion etc. and the incident never would have happened.
    If I apply an IT style root cause analysis, it will turn out that the second guy's grandfather is the root cause: he was such a fierce person that his son never developed trust, which later in the marriage led to a breakup while guy two was young, traumatising every breakup for him, thus the distraction
  • There is more than one: as the example shows, removing one factor could prevent the incident from happening, but might leave an unstable situation. Once a root cause has been announced, the fiction spreads: "everything else is fine". Example: a database crashes. The root cause gets determined: only 500 users can be handled, so a limit is introduced. However the real reason is a faulty hardware controller that malfunctions when heating up, which happens under prolonged high I/O (or when the data center aircon gets maintained).
    The business changes and the application gets used more heavily by the 500 users, crossing the temperature threshold, and the database crashes again.
  • Cause and effect are not linear: at the latest since Heisenberg it is clear that the world is interdependent. Even the ancient Chinese knew that. So it is not, as Newton described, a simple action/reaction pair (which suffers from the assumption of "ceteris paribus", invalid in dynamic systems), but a system of feedback loops subject to ever changing constraints.
Time is better spent understanding a problem using systems thinking: assess and classify risk factors and problem contributors. Map them by impact, probability, ability to improve and effort to action. Then go and move the factors until the problem vanishes. Don't stop there; build in a safety margin.
As usual YMMV


Documents vs eMails

With a public sector customer I had an interesting discussion on non-repudiation, messaging and regulatory control. We were discussing how to ensure awareness of information that has behavioural or legal consequences. While "I didn't know" is hardly a viable defence, relying on the other party to keep themselves updated is just asking for trouble. In a collaborative environment, where a regulator sees itself primarily as the facilitator of orderly conduct and only as policing the conduct as secondary mission, this is inefficient.
An efficient way is a closed loop system of information dissemination and acknowledgement. The closed loop requirement isn't just for regulators, but anybody that shares information resulting in specific behaviour. Just look at the communication pattern of a pilot with air traffic control (paraphrased): Tower: "Flight ABC23 turn to runway 270, descend to 12 thousand feet" - Pilot: "Roger that, turning to 270, descent to 12 thousand"

When we look at eMail, the standard mechanism that seems to get close to this pattern are Return Receipts:
Standard eMail flow
Using RFC3798 - Message Disposition Notification - MDN (commonly referred to as Return Receipt) to capture the "state of acknowledgement", is a folly.
  1. the RFC is completely optional and a messaging system can have it switched off (or delete the notification from the outbox), so it isn't suitable as a guarantee
  2. MDN only indicates that a message has been opened. It does not indicate: was it read, was it understood, were the actions understood, was the content accepted (the latter might not be relevant in a regulatory situation). It also doesn't indicate, and this is the biggest flaw, what content was opened. If the transmission was incomplete, damaged or intercepted, a return receipt wouldn't care.
So some better mechanism is needed!
Using documents that have a richer context, a closed loop system can be designed. When I say "document" I don't mean a proprietary binary or text format file sitting in a file system, but an entity (most likely in a database) that has content and meta information. The interesting part is the meta information:
  • Document hierarchy: where does it fit in. For a car manufacturer recalling cars that could be: model, make, year. For a legislator the act and provisions it belongs in
  • Validity: when does it come into effect (so one can browse by enactment date), when does (or did) it expire
  • History: which document(s) did it supersede, which document(s) superseded it
  • Audience: who needs to acknowledge it and how fast. Level of acknowledgement needed (simple confirmation or some questionnaire)
  • Pointer to discussion, FAQ and comments
  • Tags
An email has no structured way to carry such information forward. So a document repository solution is required. On a high level it can look like this:
Document Flow to acknowledge
Messaging is only used for notification of the intended audience. Acknowledgement is not automatic, but a conscious act of clicking a link and confirming the content. The confirmation would take a copy of the original text and sign it, so it becomes clear who acknowledged what. An ideal candidate would be XML Signature, but there isn't a model for signing that from a browser. There is an emerging W3C standard for browser based crypto with various levels of adoption. Once you have dedicated records of who has acknowledged a document, you can start chasing the missing participants, reliably and automated, and, if you are a regulator, take punitive action when chasing fails. It also opens the possibility to run statistics on how fast which types of documents get acknowledged.
The big BUT usually is: I don't want to deploy an additional 2 servers for document storage and web access. The solution for that, you might have guessed it, is Domino. One server can provide all 3 roles easily:
Document Flow on a Domino server
As usual YMMV


Value, Features and Workflows

In sales school we are taught to sell value. Initially that approach was designed to defang the threat of endless haggling over price, but it took an extra twist in the software industry. Since software companies rely on users' desire to "buy the next version" to secure revenue from maintenance and upgrade sales, a feature war was the consequence.
As a result, buyers frequently request feature comparison tables, driving the proponents of "value & vision" up the wall. It also creates tension inside a sales organisation, when a customer asks for a specific feature and the seller is reprimanded for "not selling value". How can this split in expectations be reconciled? As learned in Negotiation Basics, we need to step back and see beyond positions at the interest that drives them:
The seller doesn't want to do feature-to-feature comparisons, since they never match and are time consuming and tedious. Showing how trustworthy, visionary and future-proof the product is, on the other hand, makes creating confidence much easier.
The buyer is very aware that software doesn't have any inherent value; only its application has. Using software requires invoking its features, so the feature comparison is a proxy for the quality of the workflows it can provide in the buyer's organisation. The challenge here is that any change in feature set will break somebody's workflow.
An example:
Outlook users quite often add themselves as BCC on an outgoing eMail, so the message appears in the inbox again. From there it is dragged into a folder in an archive, so it is kept in the local, not size restrained PST file where it can be found. IBM Notes doesn't allow dragging from the inbox into a specific folder in an archive. The equivalent workflow: the user doesn't need to add herself to BCC, but simply uses Send & File on the original message. The automatic scheduled archive task moves the message later without user action required. The search box will find messages regardless of their location in the main mail file or in one of the archives. Same result, IMHO less work, but a different flow.
The solution to this is consultative selling, where a seller looks at the workflows (that are mapped to features of existing tools or practices) and proposes improved workflows based on the feature set of his products and services. A nice little challenge arises when the flow isn't clear or the proposed product has no advantage.
A little story from the trenches to highlight this: Once upon a time, when files were still mainly paper, a sales guy tried to sell one of my customers a fax server, stating that having the fax on screen, thus eliminating the need to walk to the fax machine, would be really beneficial. He looked quite dumbfounded when the manager asked: "And how do I write on this?". The manager's workflow was to scribble instructions onto incoming faxes or to document that they had been acted upon. The software couldn't do that.

In conclusion, there is a clear hierarchy in software: to have a goal and destination there needs to be a vision; that vision needs to be supported by software that provides value in implementing it. Value is generated by the software supporting and improving workflows. Workflows use one or more features of the application. Comparing features is aiming one level too low; the workflows are the real value generators. A change in software most likely requires a change in workflow.

Quite a challenge!


MongoDB to switch to IBM storage backend

One of the rising stars in NoSQL land is MongoDB. It is prominently featured in IBM BlueMix and, in conjunction with Node.js, the darling of the startup scene.
However it isn't trouble free and has been called broken by design, bad for data and a folly.
In a bold move to silence all critics, the makers turned to IBM to get access to a distributed, robust and secure backend storage engine: the venerable NSF. As Bryce Nyeggen clearly stated: "But actually, that's the Tao-like genius of MongoDB - having absolutely nothing new", this move fits nicely into the overall strategy. On top of that, thousands of XPages developers get instant access to MongoDB's APIs and coolness. Clearly an industry-wide win-win!


Learning a new language or platform

The first programming language I learned was COBOL, both using a Microfocus compiler (with an innovative tool called Animator, today commonly referred to as a "source level debugger") and on an IBM System /36. To this day I think COBOL is cool, not least since you can reuse its source code, read aloud, as a tranquiliser, and only in COBOL does this compile without error:
PERFORM makemoney UNTIL rich.
You have to read "full stop" at the end to get all COBOL nuts laughing, because when you missed it, the compiler would mess up (the guy who invented automatic semicolon insertion in JavaScript must have been traumatised by that).
After that dBase, Clipper, Foxpro followed and soon VisualBasic. My early dBase very much looked like a COBOL program, while you could discover both dBase and COBOL thinking in my VB and LotusScript.
It is actually fun to look at source code and guess what the coder's previous language was. It takes quite a while until old habits die.
In case of COBOL, they never do, since we cleverly camouflaged the DATA DIVISION. as XML Schema to retain thinking in structured, hierarchical data.
So when you look at moving your applications to XPages as a classical Domino developer, you are faced with quite a challenge:
Approaching a new platform requires skills
A common approach for skill acquisition is to entrust the very first introduction to a training or coaching session, in the vague hope that everything will sort itself out right at the beginning. IMHO that is a waste of everybody's time once a language or platform has been established for a while.
It is like signing up for a "Business Chinese conversations" class when you haven't mastered a single word yet (In case you want to learn Chinese, check out Chineasy).
A more sensible approach is to work through the basics and then discuss the best approach with a trainer or coach. This holds especially true in XPages, where some of the advanced stuff (ExtLib and beans, to name two) makes development so much easier, but scares off the novice developer.
So with 40h of video (yes, that's a whole work week - for normal mortals, for geeks, that's until Tue evening) there is little reason to join a class blank and slow everybody down.
Most developers I guided towards XPages had to (re)learn the mobile interaction patterns and the idea of a stateful web server (a.k.a JSF - not this JSF disaster).
The fastest transition to XPages and mobile apps happens when the developers spend time on their own with the core platform, to then cross swords with experienced designers. That's how it is done - so say we all.


Fun with {{Mustache}} and Notes Forms

Creating output from your objects is a never ending story. In XPages we use Expression Language; in classic Notes, forms (including $$ViewTemplates). For the JavaScript front-end developers there is an ever growing selection, and there's the good old String concatenation. On the JavaScript side I like AngularJS and Mustache.
The big question with templating is: how much logic should go into the template? Mustache is one of the logic-less approaches that expects most of the logic to be provided by the controller. Logic is limited to repeats for collections and conditional rendering depending on whether an element is there or not. I like Mustache because it is polyglot and can be used in more than one language. The creator of the Java version is Sam Pullara, who has a nice explanation of Mustache being logic-less, so go and read it. Being logic-less reduces the temptation to break the MVC pattern.
Mentioning "view" always draws the mental picture of a User Interface (with access to a controller), but there are other use cases: generating a report or transforming one source code into another. I like to do these activities using XSLT (come and see that in action next week) when the result is XML (including HTML), but that approach is not suitable when the outcome would be mainly plain text. Mustache makes that task very easy: create a sample file and then replace the sample data with {{variables}}.
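To show the substitution idea in isolation, here is a toy stand-in I wrote for illustration - this is NOT the real mustache.java library (which also handles sections, escaping and partials), just the bare {{variable}} replacement in plain Java:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration of logic-less templating: replace {{variable}} markers
// with values from a map. Unknown variables render as empty, as Mustache
// does by default. Not the real mustache.java library!
public class TinyStache {

    private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    public static String render(String template, Map<String, String> scope) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // look up the captured variable name, default to empty string
            m.appendReplacement(out,
                    Matcher.quoteReplacement(scope.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(render("Form: {{form}}", Map.of("form", "Memo")));
    }
}
```

The template stays readable as plain text and the controller owns all the logic - that separation is the whole point.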
So I was looking at how to apply that for XPages. The sample template is rather simple - more useful templates are subject to later posts:
Test Result
Class: {{.}}
Form: {{form}}
{{fieldName}}, {{fieldType}} {{#multiValue}}, Multivalue {{/multiValue}}
The interesting part is the Java component. With a small helper I extract the fields from a form (from an On Disk Project) and render the result. You could generate custom XPages out of existing forms, or generate data objects to bind to (instead of binding to a document directly). You can generate reports. There's no limit to ideas (actually there is still the 86400 seconds/day limit). My helpers look like this:
package com.notessensei.domistache;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;

import com.github.mustachejava.DefaultMustacheFactory;
import com.github.mustachejava.Mustache;
import com.github.mustachejava.MustacheFactory;

public class CommandLine {

	public static void main(String[] args) throws IOException {
		if (args.length < 3) {
			System.out.println("Usage: domistache DXLSource template output");
			return;
		}
		CommandLine cl = new CommandLine(args[0], args[1], args[2]);
		cl.convert();
	}

	private final String	sourceName;
	private final String	outputName;
	private final String	templateName;

	public CommandLine(String sourceName, String templateName, String outputName) {
		this.sourceName = sourceName;
		this.templateName = templateName;
		this.outputName = outputName;
	}

	public void convert() throws IOException {
		File source = new File(this.sourceName);
		File target = new File(this.outputName);
		File template = new File(this.templateName);

		InputStream templateStream = new FileInputStream(template);
		InputStream sourceStream = new FileInputStream(source);
		OutputStream out = new FileOutputStream(target);

		// CoreConverter and FormConverter are the other helper classes of the project
		CoreConverter core = new CoreConverter(this.getTemplate(templateStream));
		FormConverter form = new FormConverter(sourceStream);

		form.convert(core, out);

		templateStream.close();
		sourceStream.close();
		out.close();
	}

	private Mustache getTemplate(InputStream in) {
		BufferedReader r = new BufferedReader(new InputStreamReader(in));
		MustacheFactory mf = new DefaultMustacheFactory();
		return mf.compile(r, "template");
	}
}

Numbers are numbers, you have to see it! - Selenium edition

When looking at performance data and comparisons, numbers are just that: "X is 23% faster than Y" is a statement few people can actually visualize. You have to see it in action to get a feel for the real difference. That applies to vehicles and web sites alike.
Instinctively one would opt for a load test to see loading speeds, but after checking options I found a functional test will do just fine. My tool of choice here is Selenium WebDriver. It can be easily integrated into JUnit tests and with a little effort even automatically record the whole session. So here is my test plan:
  1. Get a list of 2-3 URLs from the command line
  2. Open a new clean browser session for the number of URLs fetched
  3. Position the browser windows next to each other, so each has the same size
  4. Wait for the user hitting enter to start (so (s)he can adjust window sizes or resequence them)
  5. Spin off one thread for each URL to load the page
  6. Wait again
  7. Tear down the setup
Sounds much more complicated than it actually is. The whole code is about one hundred lines and can be easily extended to do more things. Selenium provides an IDE that assists getting started to some extent. I like Selenium for a number of reasons:
  • Can be fully integrated in JUnit tests
  • No new language to learn (it has bindings for quite a few)
  • Functional test can be done without a specific browser using a generic web driver
  • Provides visible browser drivers (for Firefox and others) that by default use a new clean profile (no cache, no cookies)
  • Rich community and tons of examples
  • Can be used in your own code or delegated to cloud based testing service
  • Can test JavaScript, Ajax, Drag & Drop and Mobile
I run the code from a command line window, 3 lines high, perched at the bottom of my screen, so it doesn't get in the way of the big browser windows. Here comes the code:
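The plan above translates into roughly the following Java - a sketch with my own class and method names, assuming the Firefox driver and a 1920x1080 screen (adjust to taste):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

import org.openqa.selenium.Dimension;
import org.openqa.selenium.Point;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Sketch of the 7-step plan: one clean browser per URL, windows tiled
// side by side, all pages loaded in parallel after the user hits Enter.
public class SideBySide {

    // side-by-side bounds {x, y, width, height} for window i of n
    static int[] bounds(int i, int n, int screenW, int screenH) {
        int w = screenW / n;
        return new int[] { i * w, 0, w, screenH };
    }

    public static void main(String[] args) throws InterruptedException {
        List<WebDriver> drivers = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            WebDriver d = new FirefoxDriver(); // fresh profile: no cache, no cookies
            int[] b = bounds(i, args.length, 1920, 1080);
            d.manage().window().setPosition(new Point(b[0], b[1]));
            d.manage().window().setSize(new Dimension(b[2], b[3]));
            drivers.add(d);
        }
        Scanner console = new Scanner(System.in);
        System.out.println("Adjust/resequence windows, then hit Enter to race...");
        console.nextLine();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            final WebDriver d = drivers.get(i);
            final String url = args[i];
            Thread t = new Thread(() -> d.get(url)); // one thread per page load
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("Done - hit Enter to tear down");
        console.nextLine();
        drivers.forEach(WebDriver::quit);
    }
}
```

Run it as `java SideBySide http://site-a http://site-b` and watch the two pages load next to each other.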


A short history of directory trees

I would like, if I may, to take you on a strange journey: Where did directory trees come from?
This isn't about flowers and bees or our green friends, who missed the evolutionary advantage to emit WIFI signals, but the constructs we rely on for authentication and keeping the network in order, in other words: your directory.

Banyan VINES

In the beginning there was Banyan VINES and StreetTalk. While they are gone, they are fondly remembered. Contrary to the Wikipedia article, I recall that it needed almost 300k of memory (probably with all services loaded), which wasn't good when 640k was the limit.
At that time it was the only system offering directory services as a tree. Its demise was triggered by Banyan's late awakening to the fact that requiring a proprietary OS to run a directory service would lead to a collapse of hardware support.

Novell eDirectory

Along came Novell's eDirectory (ca. 20 years ago). Originally called NDS (Novell Directory Services), it suffered from teething problems, giving room for other entrants. Initially it required a Novell Netware 4.x server and almost missed the boat on TCP/IP support.
Similar to VINES, the proprietary OS became an issue, and only in 2003 did eDirectory become multi-platform. Novell had quite some ideas: HTTP based file sharing (webDAV), access via multiple protocols (LDAP, DSML, SOAP, ODBC, JDBC, JNDI and ADSI) and storing any information about any network object, its relations and access rights.

Embrace, Extend and Extinguish

However the rise of Windows NT paved the way for the latest entrant: Active Directory from Microsoft. Using its marketing muscle and aptitude for easy to use interfaces, Microsoft swept the directory market and most followed. While loosely based on open standards, Microsoft messed with the Kerberos implementation, using its successful embrace, extend and extinguish pattern.

Can you see the forest through the trees?

The biggest innovation in Active Directory is the concept of a Directory Forest. VINES and eDirectory only use a single tree. When you ask around for opinions about the feature, you will find that any consultant paid by the hour loves it: it introduces complexity and endless hours of configuration. IT managers who know their stuff loathe it. So what happened?

The leaky abstraction

IMHO this is a case of Leaky abstraction. VINES stored its data in some ISAM file, Netware eDirectory uses FLAIM (I thought they used Btrieve, but that's a minor detail here), while AD uses the JET engine. Originally shared with MS Access, it got forked into JET Blue instead of the more robust (?) SQL server.
Despite the impressive numbers in the specs, JET didn't scale as well as expected, so instead of fixing it, large trees were broken down into smaller trees and regrouped into a forest (wearing my asbestos underwear for this claim).
So the storage model leaked into the product capabilities. Anyway, storing anything extra (which is the whole idea of a directory service) is a perilous undertaking: once added to the schema, it will never get out. This led to fun and entertainment and a whole cottage industry of tool vendors.

Buying a product is not a strategy

So you can follow mainstream (and don't argue, I heard them all before) or give it a really hard thought:
  • What do I want to accomplish?
  • Am I comfortable with Jet Blue?
  • How comfortable am I with depending on a single vendor, or do I want to be able to select from a rich set of alternatives?
  • Is my long term strategy better served with open standards?
For user management and authentication, LDAP (for your own users) or OAuth and/or SAML (for others) might do a better job. AD doesn't do much for end-point management without additional software, so you are not bound by that. The list goes on.
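To illustrate how little ceremony plain LDAP authentication needs, here is a sketch using the JDK's built-in JNDI provider. The server URL and the DN layout are assumptions - adjust them to your directory:

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Sketch: authenticate a user by attempting a simple LDAP bind with the
// JDK's JNDI LDAP provider. Works against any LDAP v3 server.
public class LdapBindCheck {

    // build a typical user DN from a uid and a base DN (layout is an assumption)
    static String userDn(String uid, String baseDn) {
        return "uid=" + uid + ",ou=people," + baseDn;
    }

    // returns true when the bind (and thus the password) succeeds
    static boolean authenticate(String url, String dn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, dn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            DirContext ctx = new InitialDirContext(env); // bind happens here
            ctx.close();
            return true;
        } catch (NamingException e) {
            return false; // wrong credentials or server unreachable
        }
    }

    public static void main(String[] args) {
        String dn = userDn("jdoe", "o=example");
        System.out.println(dn);
        if (args.length == 1) { // pass a password to try a real bind
            System.out.println(authenticate("ldap://localhost:389", dn, args[0]));
        }
    }
}
```

No vendor SDK required - which is exactly the open-standards point.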
Happy musing!


The 5 Stages

In all areas of life things grow, mature and decline. In Buddhist scripture that is called Samsara, the wheel of life. IT is no exception. When something new, a violation of the natural order of things, comes along and displaces beloved technology, every fanboy has to go through the 5 stages of grief:
The 5 stages of grief

  1. Denial
    There is no question OLDTECH is the best in the market, there is nothing that comes close; especially NEWTECH doesn't live up to expectations. Look at OLDTECH's installed base, the capabilities and the compatibility
  2. Anger
    How on earth can anybody deploy or use NEWTECH? Are they out of their minds? What siren songs does NEWTECH sing that they blindly follow that new crap? NEWTECH needs to be eliminated from the face of the earth. Why does nobody see the superiority of OLDTECH? It is so obvious!
  3. Bargaining
    OK, how about coexistence of OLDTECH and NEWTECH? Let's add OTHERTECH to OLDTECH, so it is more attractive. If there is a discount, will you stick with OLDTECH? Look at your code base, migration is too expensive! Don't fall prey to the "the grass is greener on the other side" syndrome. Look at all the experts still around, be part of that community
  4. Depression
    How could it happen that OLDTECH fell into decline? It was my everything, my lifeblood (optional: return to step 2). How can IT ever work without OLDTECH? What should I do, now that my heart is lost?
  5. Acceptance
    Nobody can take away the memories of the good times with OLDTECH; still there is a decent living to be made with it and demand can sustain me. Actually NEWTECH does look really interesting and exciting now, I'll give it a shot
It has happened before and it will happen again (so hold back your reaction).
Like in eMail, the solution here is non-attachment and letting go


Is SharePoint a Failed Vision for Collaboration?

Rich Blank (of Jive Software) makes a case on CMSWire to consider SharePoint a failure for collaboration. Looking closely, it isn't SharePoint that is at fault.
SharePoint with its place concept and flat views is a 1:1 conceptual copy of Lotus Notes implemented with a (then) current Microsoft technology stack. Thus it does have the potential for successful collaboration, as the (then) success of Lotus Notes clearly showed.
With the right effort of adoption any collaborative technology can be successful, be it shared folders (like Dropbox), Lotus Notes, SharePoint, LinkedIn, Yammer or Connections. It is an all too common pattern: failure of collaboration gets attributed to the platform, to avoid having to wake up to the fact that the core reason for failure is lack of skills, vision, execution and adoption.
Of course, and I'm certainly biased here, a more people than place/document centric approach makes adoption and collaboration more efficient, effective and pleasant (read: usable), so SharePoint would need to fuse with Yammer and Skype to get there. This still doesn't negate the need to drive adoption.
The collaboration space still has a way to go. With all the tools around we celebrate information scatter for the sake of "social collaboration". A key success factor for eMail was "everything in one place (the inbox) at my disposal". The modern collaborative tools (I'm not fond of the term social, since in my part of the world it still has a different meaning) are wanting in place and control. Activity streams (as the transport protocol) seem most promising, since they are open, don't reinvent the wheel (after all they are HTTP and ATOM) and can be contributed to/digested in any programming language.
Nevertheless the UIs offered are too consumption-oriented and not action-oriented enough. Embedded experiences are a step to remedy that, but I still can't act on the stream, only on some of the information that flows by. So there's a way to go to make this collaboration effective, efficient and pleasant. Be it Yammer, SharePoint, IBM Notes or IBM Connections (or any of the niche players).


Round-Trip editing experience in web browsers

Our applications are increasingly moving to http(s) based interfaces, that is HTML(5) or apps. Besides the irony that we abandon client applications on desktops to reintroduce them on mobile devices (is ObjectiveC the VB of the 21st century?), that's a good thing.
However from time to time, unless you live in the cloud, we need to integrate with existing desktop applications, mostly but not limited to Office applications. The emerging standard for this is clearly CMIS. Most office packages do support CMIS in one way or the other today. However it does not integrate into file managers like Finder, Explorer or Nautilus (at least not to the best of my knowledge at the time of writing). One could sidestep that shortcoming by using CmisSync, but that only works as long as you have enough local storage and corporate security lets you do so.
Another challenge to sort out: once applications that "surround" Office documents are all HTML based, one should be able to commence editing directly from clicking something in the web based business application. So the criteria are:
  1. Ability to interact with documents using File-Open in applications (ironically the ODMA standard got that working more than 15 years ago, but died due to vendor negligence)
  2. Ability to interact with documents using the default file navigator (Finder, Nautilus, Explorer, mobile devices)
  3. Ability to have a full roundtrip editing experience in the browser. A user clicks on a button or link (or icon), the document opens, editing commences. When hitting the save button the original document is updated, so when someone else clicks on that link the latest updates can be seen there
Once you want to support non-Windows platforms (and work in multiple browsers), ActiveX (the approach Microsoft chose) is ruled out. Enter webDAV. It covers #1 and #2, with some difficulties, to the point that the commercial package for webDAV on IBM platforms (Connections, Domino, Quickr) doesn't support Explorer on Windows 7, a flaw it coincidentally shares with a much older package, but not with the current project.
The challenge for #3: whenever a browser encounters a URL with http(s), it will try to handle that target. If the mime type is not directly handled by the browser, it will trigger the download of the file and hand it over to the application (with or without a dialog, depending on your settings). When the user then interacts with that document, it is the downloaded version in a temp directory. Changes are not sent back!
On the other hand: if a user opens a file directly in the office application by selecting a webfolder, a mounted file system or directly specifying the URL, then roundtrip editing happens since the office applications check the URL for webDAV capabilities (I had some fun with Apache's TCPMon figuring that out).
The solution for this puzzle is to use a different protocol. Protocols are simply indicators that tell the operating system which application is in charge. The commonly known ones are http(s), ftp, ssh, mailto (and gopher for the old generation), but also notes or sap (and others).
For my approach I used webdav(s) as the protocol name. On Windows protocols live in the registry, on other operating systems in configuration files.
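On Windows, registering such a protocol follows the standard URL-protocol convention in the registry - a sketch (the helper's install path is hypothetical):

```reg
Windows Registry Editor Version 5.00

; Register "webdav" as a URL protocol handled by a helper application
[HKEY_CLASSES_ROOT\webdav]
@="URL:webdav Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\webdav\shell\open\command]
; The helper (hypothetical path) receives the full webdav:// URL as %1
@="\"C:\\Program Files\\WebdavHelper\\webdavhelper.exe\" \"%1\""
```

A matching `webdavs` key covers the https variant.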
Once configured correctly a little helper application would pick up the webdav(s) URL from the browser, check for the default application and launch it with the URL converted to http(s) on the command line - and voila roundtrip editing happens (check out the project).
Of course some stuff needs handling: the office application doesn't share credentials with the browser, so SSO or other means need to be in place to ensure smooth user experience. Also MS-Office not only probes for webDAV capability of that URL, but also if that server behaves like SharePoint by probing an extra URL (some enlightenment what it is looking for would be nice).
On Linux (and probably OS/X) one doesn't need a C++ program to figure out the right application, a shell script does the trick.
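Such a handler script could look like this - a sketch that assumes xdg-open dispatches to the desktop's default application for the resulting http(s) URL:

```shell
#!/bin/sh
# Hypothetical webdav(s):// protocol handler - registered with the desktop
# (e.g. via a .desktop entry) so the browser hands matching URLs to it.

# translate the custom scheme back to plain http(s)
to_http() {
  case "$1" in
    webdavs://*) echo "https://${1#webdavs://}" ;;
    webdav://*)  echo "http://${1#webdav://}"  ;;
    *)           return 1 ;;
  esac
}

# hand the translated URL to the default application; the office suite
# then talks webDAV to the server and round-trip editing just works
if [ $# -gt 0 ]; then
  url=$(to_http "$1") || { echo "usage: $0 webdav(s)://..." >&2; exit 1; }
  xdg-open "$url"
fi
```

The heavy lifting (speaking webDAV) is done by the office application itself; the script only rewrites the scheme and delegates.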


System Administrator's Mantra

All that can be automated is inherently boring
All that is inherently boring will slip my attention
All that slips my attention will lead to trouble
Sooner or later the trouble will catch up with me
I therefore vow to fight the beast of boredom
With skills and scripts and automation
To spend time with and for what really matters

You, my user!


Now it's out, keep it running!

Congratulations, your new web business is up and running, your ingenious idea took flight, users are flocking in, the team is growing and brimming with ideas. This is the perfect time to take a step back and evaluate what it takes to run a web business that offers a global cloud based service. You need to evaluate what your core strength is and what you leave to others.
There is no hard and fast rule, just a huge set of questions. A global retail and logistics champion runs every piece of hardware in their own data centres, while a file sharing champion doesn't own a single (production) server and leaves its product to be run by a large Platform-as-a-Service provider.
So what do you need to watch out for? Here is my cheat sheet:
What you need to watch out for when running a web business
  • Platform:
    Where is your stuff running? In your office? In a hosting center? With a PaaS provider?
    • Storage:
      What type of information do you store? What is the read to write ratio? Do you need an RDBMS or is a NoSQL database smarter? How many artifacts do you need to store? Do you need to keep the same information in multiple formats (e.g. media)? Can you replicate your storage for backup and high availability?
    • Availability:
      Do you have a maintenance window? How fast does new information need to be processed? If you deal in financial derivatives you need to think in microseconds; for a job ad you might have hours.
      • Deployability: How fast and easy can you deploy a new or updated instance of your application including data?
      • Recoverability: What happens when your primary site goes down?
      • Resilience: When you suddenly attract wanted and unwanted attention, how well can you fend it off?
    • Load balancing: how much of your application and data can be distributed?
    • Cache: are you easy on the network? Total carrier bandwidth is still growing slower than the number of bandwidth consuming devices
    • Latency: are you close to your customers? 100 quick Ajax calls with 2ms latency are no delay, but 100 with 500ms kill your application
    • Staging: Does your staging environment match your production? Can you run staging tests with similar loads?
  • People:
    They make and break your success. Are the barriers to entry as low as possible?
    • Usability: are the objectives for users clear, articulated, tested and pleasant? Are the right objectives known? E.g. for my business travel I have 3 criteria to select a hotel: location, quality internet and gym facilities. However this information is scattered around and makes selection very difficult for me. I don't care about restaurants, spa, shops or conference facilities.
    • Connected: is your brand the "Scotch-Tape" (Tesa for Germans) of your business? Can users access your system with credentials they already have? Registration processes are such a turn-off. But do you ask for enough information in time?
  • Mobile:
    Do you look good on mobile? Does your app work in badly connected environments? Is the app experience better than just using the mobile browser? Do you take advantage of device sensors (location, NFC) and capabilities (PIM integration, share function)?
  • Processes:
    How Agile are your processes? Don't mix up "Agile" with "undefined". How do you handle:
    • Agility
    • Peer reviews
    • Use cases
    • Release management
    • Feature / bug management
  • Code:
    Bad code can break your business. Did you pick the right language, framework, methodology? Can your people deal with your code base? Is your code base a giant furball or a neat box of Lego blocks?
  • Security:
    Do you execute Penetration testing? How good is your test coverage? Is your service perceived as trustworthy? Do you manage your runtime security? Like: is your JVM current?
As usual YMMV


10 Commandments for public facing web applications

A customer recently asked how a public facing web application on Domino would be different from an Intranet application. In general there shouldn't be a difference. However in a cost/benefit analysis, Intranets are usually considered "friendly territory", so less effort is spent on hardening against attacks and poking around (much to my delight when I actually poke around). With this in mind, here you go (in no specific order):
  1. Protect your server: Typically you would have a firewall and reverse proxy that provides access to your application.
    It should be configured to check URLs carefully to ensure no unexpected calls are made from somebody probing database URLs. It is quite some work to get that right (for any platform), but you surely don't want to become "data leak" front-page news.
    There's not much to do on the Domino side, it is mostly the firewall guys' work. Typical attack attempts include stuff like ?ReadViewEntries, $Defaultxx or $First. Of course when you use Ajax calls into views you need to cater for that.
    I would block *all* ?ReadViewEntries and have URL masks for the Ajax calls you plan to use. Be careful with categorized views. Avoid them if possible and always select "hide empty categories". Have an empty $$ViewTemplateDefault that redirects to the application
  2. Mask your URLs: Users shouldn't go to "/newApp/Loand2013/loanapproduction.nsf?Open" but to "/loans". Use Internet site documents to configure that (possibly the firewall/reverse proxy can do that too). In Notes 9.0 IBM provides mod_domino, so you can use the IBM HTTP Server (a.k.a. Apache HTTP) to front Domino. On the XPagesWiki there is more information on securing URLs with redirects. Go and read them
  3. Harden your agents: Do not allow any ?OpenAgent URL (challenge: an agent also opens on ?Open, so if all agents have a certain naming you can use URL pattern to block them). In an agent make sure your code handles errors properly. Check where the call to an agent came from. If it was called directly discard it.
  4. Treat data with suspicion: Do not rely on client side validation. Providing it is nice for the user as a comfortable input aid. However you don't control the devices and browsers anymore, and an attacker can use Firebug or cURL to bypass any of your validations. You have to validate everything on the server (again). Also you have to check content for unexpected input like passthru HTML or JavaScript. XPages does that for you
  5. Know your user: Split your application into more than one database. One for the publicly accessible content (anonymous access) and one that requires authentication. Do not try to dodge authenticated users and re-invent security mechanisms. You *will* overlook something and then your organisation makes headline news in the "latest data breach" section. There are ample examples of how to generate LTPA tokens outside of Domino, so you don't need to manage usernames/passwords etc. if you don't want to. Connect them to your existing customer authentication scheme (e.g. eBanking if you are a bank) for starters. Do not rely on some cookie you try to interpret and then show or don't show content. The security tools at your hand are ACLs and reader fields
  6. Test, Test, Test: You can usability test, load test, functional test, penetration test, validity test, speed test and unit test. If you don't test, the general public and interested 3rd parties will do that for you. The former leads to bad press, the latter to data breaches
  7. Use a responsive layout: Use the IBM OneUI (v3.0 as of this blog date) or Bootstrap (get a nice theme). XPages provides great mobile controls. Using an XPage single page application you can limit the range of allowed URLs to further protect your assets
  8. Code for the most modern browser: use HTML5 and degrade gracefully. So it is not "must look the same in all browsers" but "users must be able to complete tasks in all browsers" - experience might differ. Take advantage of the local cache (use an ETag and all the other tips!)
  9. Use https the very moment a user is known. If in doubt try Firesheep
  10. Of course the Spolsky Test applies here too!
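The ETag advice in #8 boils down to a cheap validator plus a header comparison. A minimal sketch in plain Java (hypothetical helper names; in practice this logic sits in a servlet or XAgent that sets the response headers):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: derive an ETag from the response body and answer 304 Not Modified
// when the client's If-None-Match header already carries that tag.
public class ETagCheck {

    // quoted hex digest of the body, usable as an ETag header value
    static String etagFor(byte[] body) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder hex = new StringBuilder("\"");
            for (byte b : md.digest(body)) {
                hex.append(String.format("%02x", b));
            }
            return hex.append('"').toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    // 304 when the If-None-Match header matches the current body, 200 otherwise
    static int status(String ifNoneMatch, byte[] body) {
        return etagFor(body).equals(ifNoneMatch) ? 304 : 200;
    }

    public static void main(String[] args) {
        byte[] page = "<html>Hello</html>".getBytes(StandardCharsets.UTF_8);
        String tag = etagFor(page);
        System.out.println(tag + " -> " + status(tag, page));
    }
}
```

A 304 response carries no body, so repeat visitors save the full transfer.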
As usual YMMV


Generating Test data

You have seen it over and over: Test1, qwertyyui, asdfgh as data entered in development to test an application. Short of borrowing a copy of production data, having useful test data is a pain in the neck. For my currently limited exposure to development (after all I work as a preSales engineer) I use a Java class that helps me generate random results from a set of selection values. To make this work I used the following libraries:
  • JodaTime: takes the headache out of date calculation
  • gson: save and load JSON, in this case the random data
  • Lorem Ipsum: generate blocks of text that look good
  • Here you go:
    package com.notessensei.randomdata;

    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.io.PrintWriter;
    import java.util.Date;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    import org.joda.time.DateTime;

    import com.google.gson.Gson;

    import de.svenjacobs.loremipsum.LoremIpsum;

    /**
     * Source of random data to generate test data for anything. Before use you need
     * to either load lists of your data using addRandomStringSource or load a JSON
     * file from a previous run using loadDataFromJson
     *
     * @author NotesSensei
     */
    public class RandomLoader {

        /**
         * How often should the random generator try for getRandomString with
         * exclusion before it gives up
         */
        public static final int             MAX_RANDOM_TRIES    = 100;
        private Map<String, List<String>>   randomStrings;
        private Random                      randomGenerator;

        /**
         * Initialize all things random
         */
        public RandomLoader() {
            this.randomStrings = new HashMap<String, List<String>>();
            this.randomGenerator = new Random(new Date().getTime());
        }

        /**
         * Adds or amends a collection of values to draw from
         */
        public void addRandomStringSource(String sourceName, List<String> sourceMembers) {
            if (!this.randomStrings.containsKey(sourceName)) {
                this.randomStrings.put(sourceName, sourceMembers);
            } else {
                // We have a list of this name, so we add the missing values
                List<String> existingList = this.randomStrings.get(sourceName);
                for (String newMember : sourceMembers) {
                    if (!existingList.contains(newMember)) {
                        existingList.add(newMember);
                    }
                }
            }
        }

        /**
         * Get rid of a list we don't need anymore
         *
         * @param sourceName
         */
        public void dropRandomStringSource(String sourceName) {
            if (this.randomStrings.containsKey(sourceName)) {
                this.randomStrings.remove(sourceName);
            }
        }

        /**
         * Gets a random value from a predefined list
         */
        public String getRandomString(String sourceName) {
            if (this.randomStrings.containsKey(sourceName)) {
                List<String> sourceCollection = this.randomStrings.get(sourceName);
                int whichValue = this.randomGenerator.nextInt(sourceCollection.size());
                return sourceCollection.get(whichValue);
            }
            // If we don't have that list we return the requested list name
            return sourceName;
        }

        /**
         * Get a random String, but not the value specified. Good for populating
         * travel to (exclude the from) or from/to message pairs etc.
         *
         * @param sourceName
         *            from which list
         * @param excludedResult
         *            what not to return
         * @return a random value, in the worst case the excluded one
         */
        public String getRandomStringButNot(String sourceName, String excludedResult) {
            String result = null;
            for (int i = 0; i < MAX_RANDOM_TRIES; i++) {
                result = this.getRandomString(sourceName);
                if (!result.equals(excludedResult)) {
                    break;
                }
            }
            return result;
        }

        /**
         * For populating whole paragraphs of random text, LoremIpsum style
         */
        public String getRandomParagraph(int numberOfWords) {
            LoremIpsum li = new LoremIpsum();
            return li.getWords(numberOfWords);
        }

        /**
         * Get a date in the future
         */
        public Date getFutureDate(Date startDate, int maxDaysDistance) {
            int actualDayDistance = this.randomGenerator.nextInt(maxDaysDistance + 1);
            DateTime jdt = new org.joda.time.DateTime(startDate);
            DateTime jodaResult = jdt.plusDays(actualDayDistance);
            return jodaResult.toDate();
        }

        /**
         * Get a date in the past, good for approval simulation
         */
        public Date getPastDate(Date startDate, int maxDaysDistance) {
            int actualDayDistance = this.randomGenerator.nextInt(maxDaysDistance + 1);
            DateTime jdt = new org.joda.time.DateTime(startDate);
            DateTime jodaResult = jdt.minusDays(actualDayDistance);
            return jodaResult.toDate();
        }

        /**
         * Lots of applications are about $$ approvals, so we need a generator
         */
        public float getRandomAmount(float minimum, float maximum) {
            // nextFloat() returns a value between 0.0F and 1.0F
            float seedValue = this.randomGenerator.nextFloat();
            return (minimum + ((maximum - minimum) * seedValue));
        }

        /**
         * Save the random strings to a JSON file for reuse
         */
        public void saveDatatoJson(OutputStream out) {
            Gson gson = new Gson();
            PrintWriter writer = new PrintWriter(out);
            gson.toJson(this, writer);
            writer.flush();
        }

        /**
         * Load a saved JSON file to populate the random strings
         */
        public static RandomLoader loadDataFromJson(InputStream in) {
            InputStreamReader reader = new InputStreamReader(in);
            Gson gson = new Gson();
            RandomLoader result = gson.fromJson(reader, RandomLoader.class);
            return result;
        }
    }
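If you just need the pattern without the libraries, the core idea - named value lists plus a "random but not X" draw - fits into plain JDK Java. A minimal sketch (the class and method names here are invented for illustration, they are not part of the class above):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomDataSketch {
    private final Map<String, List<String>> sources = new HashMap<>();
    private final Random rnd = new Random();

    // Register a named list of candidate values
    public void addSource(String name, List<String> values) {
        sources.put(name, values);
    }

    // Draw a random member; unknown list names fall back to the name itself,
    // mirroring the fallback behaviour of getRandomString above
    public String pick(String name) {
        List<String> values = sources.get(name);
        if (values == null || values.isEmpty()) {
            return name;
        }
        return values.get(rnd.nextInt(values.size()));
    }

    // Retry a bounded number of times to avoid the excluded value
    public String pickButNot(String name, String excluded) {
        String result = null;
        for (int i = 0; i < 100; i++) {
            result = pick(name);
            if (!result.equals(excluded)) {
                break;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        RandomDataSketch sketch = new RandomDataSketch();
        sketch.addSource("cities", Arrays.asList("Singapore", "Sydney", "Tokyo"));
        String from = sketch.pick("cities");
        String to = sketch.pickButNot("cities", from);
        System.out.println(from + " -> " + to);
    }
}
```

The bounded retry in pickButNot is the same trade-off as MAX_RANDOM_TRIES: with a one-element list it eventually gives up rather than looping forever.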
As usual YMMV


Inbox vs. Stream interaction pattern

A recent Tweet exchange with Alan got me thinking about the (inter)action patterns in the collaborative software we use. On one hand we have the incumbent eMail: time tested, loved, loathed and (so the hype) under threat from the new kid on the block: activity streams (a.k.a. river of news), which come in various technological implementations (from proprietary formats to RSS, ATOM and JSON). I'll compare the two from the perspective of work, where you need to get things done (pun intended). I will use email actions from Lotus Notes as example, available actions in your eMail software might vary. Similarly I use IBM Connections 4 as the stream example. Here you go:
Purpose | eMail | Social stream
What's new? | Scan inbox, look for unread marks, switch to "unread only" mode | Scan stream, memorise where you left off
Read details | Click on item (with preview pane) or open it | Click on item, then click "show all"/"show more", click on right arrow (in Connections 4)
See conversation | Click on triangle to expand or use show menu (when eMail open) | Click on item, then click "show all"/"show more", click on right arrow (in Connections 4)
Reply | Click reply (with too many options) | Click comment
Tag a reply | Send & File, Categorize | Use # in the reply
Indicate that you concur/endorse an item | Send a reply | Click the like button
Mark as read | Automatic when previewing or reading | n/a
Associate with something | File in folder(s), categorize (yep, that's like tagging, but not shared with others and in Notes since 1.0) | Tag
Information not relevant | Delete or remove from inbox (my favourite for "might be relevant some other time" since it still shows up in all documents and search) | n/a (in Activities there is tune out, or remove watching a specific tag)
Read later | Keep unread, file in folder(s) | n/a
Associate with a project or a customer | File in subfolder(s) of project/customer folder, use custom plug-in for meta data | Tagging is flat only, harder to find later
Followup action needed | Flag for followup, copy into task | n/a (the sharebox might remedy that in future)
Add to personal/team knowledge collection | Copy into journal or discussion db (there's a plug-in for that) | n/a, but you can use the Evernote browser plug-in
Scheduled action needed | Copy into new calendar entry | n/a
Let other people know | Forward | Reshare with @Name in the message
Look at specific information like a project, product, customer | Open that folder, look at the categorized view, fulltext search | Browse for the tag, fulltext search
Suggest filing/tagging destination | Use the SwiftFile plug-in | Look at other people's tags for the item
Suggest what else is interesting | n/a | Social analytics
It seems the interaction patterns for actions in the social space need to mature a bit. Interestingly, Google Reader for example has solved the problem of read/unread for a stream of news (your RSS feeds) by tracking which entry you focus on. Might that make sense for an activity stream (show unread only)?


Hide my Ass on Linux

When you travel a lot in places with some habits, you want to keep your internet activity as private as possible (there are other reasons too, besides making local access restrictions harder to enforce). One of the VPN services, aptly named after a grey furry animal, is Hide my Ass. I like them, since they offer both OpenVPN and PPTP, provide access points all over the planet and don't charge for switching between them. When overseas I frequently use the Singapore access point to "phone home".
They provide GUI installers for Windows and Mac, but just a command line script for Linux (anyway, Linux users live on the command line, don't we?). So I whipped up a small script that provides a minimal GUI (with lots of room for improvement). Enjoy:
#!/bin/bash
# HMA Dialog using the Zenity dialog
# (CC) 2012 St. Wissel, Creative Commons 2.0 Attribution, Share-Alike

if [[ x$1x == 'xx' ]]; then
    # No location given as a parameter, so ask with a Zenity radio list
    selloc=$(eval zenity --width=640 --height=480 --list --text \"Pick a Location\" --radiolist \
        --column \"\" --column Location $(curl -s |
            sed 's/.*/FALSE "&"/;1s/^FALSE /TRUE /'))
else
    selloc=$1
fi
echo "you selected ${selloc}"

#Now fetch the config
COUNTRY=`echo $selloc | sed 's/ /+/g'`
curl -s "$COUNTRY" > client.cfg
#Finally connect
sudo openvpn --config client.cfg --auth-user-pass secret
The script depends on zenity, but that shouldn't be more than an apt-get or yum away.
As usual: YMMV


Graceful degradation

Web development is a curious thing. We constantly push ourselves to upgrade skills and capabilities, we learn Dojo, jQuery and HTML5. We make friends with Websockets and Webworkers, only to subject our creations to a runtime environment (a.k.a. the browser) we can't predict and that in many cases might not be up to the task.
So our play-it-safe strategy is to look for the lowest common denominator, a.k.a. only legacy supported functions can be used. This limits what we can deliver and frustrates users and developers (interestingly, corporate decisions to inflict this suffering onto colleagues by sticking to old versions go unpunished).
I suggest: NO MORE!
Every organisation today has the latest modern browsers actually in use: just grab the iPhones, iPads and Androids from any executive. Those are the platforms to aim for. The right approach going forward (besides responsive design) is:

Graceful Degradation

When you look up the term in Wikipedia you get redirected to the entry for fault tolerant systems. And this is exactly the viewpoint you need to take:

A missing HTML5 capability in a browser is a runtime fault, NOT a system constraint

The task now is to build your application so it degrades gracefully (to a point). One definition I came across puts it nicely: "Since web browsers have been around as long as the Web, it is possible to have customers viewing your web pages in browsers that are extremely old and missing features of more modern browsers. Graceful degradation is a strategy of handling web page design for different browsers. A web design that is built to gracefully degrade is intended to be viewed first by the most modern browsers, and then as older, less feature-rich browsers view it, it should degrade in a way that is still functional, but with fewer features."
The IBM OneUI is a nice example of how this can work. It is still workable, though not pretty, when seen through Lynx, the mother of all browsers - available, maintained and text only until today.
So your new application can be designed with all the shiny features, as long as you include your degradation path. One degradation endpoint can be a little infobox that explains the manual steps, or states "feature xy (that would be your application feature, not a technical thingy) is not supported, sorry". Luckily the big libraries already contain a lot of that degradation code.
Just a thought: HTML5 provides local storage and caching. Why not design the app to take full advantage of that and fall back to loading every page if that's not available?
The discussion around graceful degradation isn't exactly new, read for yourself. As usual YMMV


Hyperlinks need to live forever - Blog edition

QuickImage Category  
THE bummer mistake in any web revamp is a total disregard for page addresses. The maximum to be found is a nice 404 page with a notice that things have been revamped and an invitation to search. What a waste of human time and disregard for a site's users!
The links to the original pages live outside the site's control, and Jakob already stated in 1998 that pages need to live forever. So what can you do when swapping blog platforms?
If your new platform runs behind an Apache HTTP server (also known as IHS), there is mod_rewrite that allows you to alter incoming addresses (the old links) into the new destinations based on a pattern match (other http servers have similar functions, but that's a story for another time).
HTTP knows 2 redirection codes:
  • 302 for temporary redirections
  • 301 for permanent ones.
You want to use the latter, so at least the search engines update their links.
Now your new URL pattern most likely uses a different structure than the old one, so a simple Regex might not help for that transition. E.g. your existing format might be /myblog.nsf/d6plinks/ABCDEF while the new pattern would be /blog/2001/10/is-this-on.html.
For this case mod_rewrite provides the RewriteMap, where you can use your old value (ABCDEF in our case) to find the new URL. Unfortunately mod_rewrite is very close to dark magic. It can range from a simple key/value lookup up to invoking an external program to get the result. For the key/value lookup you need to make your key case insensitive, so all possible case variations work. This is what I figured out:
RewriteEngine on
RewriteMap lowercase int:tolower
RewriteMap blog-map dbm:/var/www/
RewriteRule ^/myblog.nsf/d6plinks/(.*) /blog/${blog-map:${lowercase:$1}} [NC,R=301,L]
Let me pick that apart for you:
  1. RewriteEngine on
    This switches the rewrite engine on. It requires that mod_rewrite is loaded (check your documentation for that)
  2. RewriteMap lowercase int:tolower
    This enables an internal conversion of the incoming string into its lower case form
  3. RewriteMap blog-map dbm:/var/www/
    This defines the actual lookup. The simplest case would be a text file with key and result in one line, separated by a space. However that might not perform well enough for larger numbers of links, so I chose an indexed table format. It is very easy to create, since the tool is included in the Apache install. I generated my translation list as a text file and then invoked httxt2dbm -v -i /var/www/blogmap.txt -o /var/www/ and the indexed file is created/updated
  4. RewriteRule ^/myblog.nsf/d6plinks/(.*) /blog/${blog-map:${lowercase:$1}} [NC,R=301,L]
    This is the rewrite rule with a nested set of parameters that first converts the key to lower case and then looks up the new URL. If a key isn't found it redirects to /blog/, which suits my needs; you might want to handle things differently.
    In detail:
    1. ^/myblog.nsf/d6plinks/(.*) matches all links inside d6plinks, the () "captures" ABCDEF (from our example), so it can be used as $1
    2. ${lowercase:$1} converts ABCDEF into abcdef
    3. ${blog-map: ... } finally looks it up in the map file
    4. [NC,R=301,L] are the switches governing the execution of the rewrite rule:
      • NC stands for NoCase. It allows matching /MyBlog.nsf/ /MYBLOG.NSF/ /myblog.NSF/ etc. It doesn't however convert the string
      • R=301 issues a permanent redirect response (default is 302, temporary)
      • L stops the evaluation of further redirection rules
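If you'd rather test the lookup chain outside Apache first, the same logic - match the old URL, lowercase the captured key, look it up with /blog/ as fallback, answer with a 301 target - can be sketched in plain Java (the class name and the sample mapping below are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of what the rewrite rule does, expressed in plain Java
public class BlogRedirect {
    // NC flag: match the old path case-insensitively
    private static final Pattern OLD_URL =
            Pattern.compile("^/myblog\\.nsf/d6plinks/(.*)", Pattern.CASE_INSENSITIVE);

    private final Map<String, String> blogMap = new HashMap<>();

    public BlogRedirect() {
        // Hypothetical sample mapping - in Apache this lives in the dbm file
        blogMap.put("abcdef", "2001/10/is-this-on.html");
    }

    // Returns the new location for a 301 response, or null if the rule doesn't apply
    public String newLocation(String oldPath) {
        Matcher m = OLD_URL.matcher(oldPath);
        if (!m.matches()) {
            return null;
        }
        String key = m.group(1).toLowerCase(); // RewriteMap lowercase int:tolower
        // ${blog-map:...} with /blog/ as the fallback for unknown keys
        return "/blog/" + blogMap.getOrDefault(key, "");
    }
}
```

Unknown keys end up at /blog/, exactly like the fallback described above.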
As usual YMMV


IBM Forms 8.0 Workshop - enroll for free!

From my capable colleagues from IBM developer works comes the new IBM Forms 8.0 workshop:

Workshop Abstract

In this workshop, you will learn about the key features provided in IBM Forms version 8 with the focus on the new IBM Forms Experience Builder. Using the IBM Forms Experience Builder you will learn to use the simple web-based user interface  to develop interactive form driven applications, integrate role based security, implement the integrated lightweight routing for approvals and notifications, explore personalized integration with WebSphere Portal, and leverage open standards utilizing REST API services.


Do you need an effective method to capture and validate end user input, in real time, either from the Web or in stand-alone applications?  Are you in an industry that requires an exact match to existing web applications or paper forms to meet government or industry regulations?  Is there a need to upgrade the end user experience to eliminate input errors so that only validated and required data is collected by your application or service?  IBM Forms V8.0 automates forms-based business processes to help improve efficiency, customer service, and time to value making you more responsive to customer and market needs.  IBM Forms V8.0 enables Line-of-Business and IT users to collect data and automate processes via agile web-applications as well as classic document based forms applications in one nicely integrated package which can also be utilized on iOS and Android tablet based devices. A new key feature in IBM Forms version 8.0 introduces IBM Forms Experience Builder which provides the following benefits to line of businesses and IT users:
  • Exceptional web-based data collection applications
  • Easy-to-use, web-driven creation of compelling, data collection user interface
  • Lightweight routing for approvals and notifications
  • CSS-based styling
  • Easy integration with REST Services via REST API

Events highlights

This workshop highlights the following technologies and how to develop and deliver a customised web-based forms solution with IBM Forms 8.0 product suite:
IBM Forms Experience Builder adds higher value to web engagements by enabling Business and IT to create solutions for social business and collaboration
  • Strengthen customer relationships with engaging experiences
  • Reduce time to market by deploying complete data capture solutions with routing capabilities
  • Drive operational efficiencies with smart data collection
  • Understanding the value IBM Forms Experience Builder offers to solution developers and services providers
  • Exploring features of the IBM Forms Experience Builder
  • Integrating services that exist with  IBM WebSphere Portal
  • Learning the human centric routing for forms applications provided by the integrated routing and notifications feature
  • Data centric solution for rapid application development, collecting data and handling high volume transactions
IBM Forms Sever, Viewer, and Designer adds higher value to system of records in business processes and improves end-to-end processing.
  • Improve efficiency by automating paper-based processes
  • Reduce transaction times and operating costs
  • Get auditable with digitally signed business records
  • Document centric solution for storing digital signatures in a self contained document
  • Investment protection through tighter integration with IBM portfolio including IBM Customer Experience Suite, IBM WebSphere® Portal, IBM FileNet® Business Process Manager, and IBM Case Manager


Currently this training is available as:
  • Self-paced education. Complete the enrollment form. You will receive an enrollment confirmation e-mail. Our Support Team will send you a separate e-mail with details on how to proceed. Participants download the training materials online and reserve a hosted lab image for a week to work through presentations, demonstrations and lab exercises at their own pace.
  • Live event enrolment. To enroll in the classroom format, select the desired date and location. There is no charge to attend. 25 Oct 2012 - 31 Dec 2013 (English)
  • To inquire about this workshop or other ICS workshops send an e-mail to with the following information: Name of workshop, Questions, Work e-mail address
In case you wonder what IBM Forms actually is: it is IBM's implementation of the XForms standard, using XFDL (Extensible Form Definition Language) for layout. An IBM Form contains data and layout, so a form will always render in its original format (which is important for legal documents). Furthermore it can be digitally signed with overlapping signatures (basically countersigning of signatures to tamper-proof electronic records).


If you think decomposition is from CSI, stop writing code!

I like coaching developers to write better code, but sometimes it is too much. So here it goes:

If you think decomposition is from CSI, STOP writing code!

So, it is off my chest. Simple rules of thumb:
  • If a function doesn't fit on a printed page, it is too long
  • A function does one thing. If you loop through a document (or record) collection, call a function with the individual document as parameter
  • Use objects and inheritance with a factory class instead of monster case structures with copy/paste code duplication
  • Refuctoring was meant as a joke
  • The functions that declare a variable clean it up, not the called functions
  • Global variables are evil unless you have 7 good reasons to have them
When you write your code in JavaScript use JSHint (someone port this to SSJS please); when you write Java, look at Crap4J. Unfortunately there is no free one for LotusScript, but you could use Visustin to visualize your functions - a 3 m image would definitely mean a function is too long.
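The second rule of thumb - the loop only iterates, the per-document work lives in its own function - looks like this in Java (Document here is a hypothetical stand-in for illustration, not the Notes class):

```java
import java.util.List;

public class CollectionProcessor {

    // Hypothetical stand-in for a document in a collection
    static class Document {
        final String title;

        Document(String title) {
            this.title = title;
        }
    }

    // The loop does one thing: iterate. It delegates the actual work.
    public int processAll(List<Document> docs) {
        int processed = 0;
        for (Document doc : docs) {
            processOne(doc);
            processed++;
        }
        return processed;
    }

    // The per-document logic lives in its own short function
    private void processOne(Document doc) {
        System.out.println("Processing " + doc.title);
    }
}
```

Each function stays short, testable and printable on one page.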

Want to improve? Attend a computer science 101 class at a good university, plus some programming lessons. This is the 21st century, it won't cost you anything (other than your time). Keep in mind: for every line you write beyond a function's 100th, a kitten must die.


How much effort will you spend on old browsers?

The JavaScript demi-god Douglas Crockford is attributed with the statement: "the browser is the most hostile software development environment ever imagined" (I think he made that before mobile phones were around). The problem is not only that there are different engines for HTML and JavaScript, but the fact that older browsers are still around. Chrome and Firefox have built-in upgrade engines and IE9++ looks quite decent. So users could do with a little reminder and encouragement (even if it is just to pick up the torches and pitchforks and march to the IT department).
For the friends of Bootstrap here's a little code snippet that nicely fits just behind the body tag in your HTML page:
<!--[if lt IE 9]>
<div class="alert alert-block alert-error fade in">
  <button type="button" class="close" data-dismiss="alert">&times;</button>
  <h4 class="alert-heading">There is browser trouble ahead!</h4>
  <p>Modern web applications require modern browsers.
     Older browsers are insecure and cause needless development effort
     we would rather spend on better functionality. We are sure you understand!</p>
  <p>Update today:
  <a class="btn" href="">Google Chrome</a>,
  <a class="btn" href="">Mozilla Firefox</a>,
  <a class="btn" href="">Apple Safari</a> or
  <a class="btn" href="">Internet Explorer 9++</a></p>
</div>
<![endif]-->
Others take more drastic measures. Luckily IE6 usage is down to 0.4% in the US (and 6% worldwide); now rinse and repeat for IE7, IE8 and old versions of the others.


Calendars worked better when they were manual, did they?

Before calendars became electronic, having the right system in place was a signal of professionalism (admittedly abused as a status symbol quite often) and calendars were very personal.
At the right level one had access to a personal assistant (the one without the D between the P and the A) who organized and maintained all aspects captured in the calendar. Inquiry of 3rd parties into your calendar was facilitated by a high powered neural network (a.k.a. the human brain) that translated the individual calendar entries into the information density deemed fit for the inquirer. "Information density" also called "information precision" is an interesting concept, that seems hard to translate into time planning software.
The information density decreases with distance to you
Your PA would know how to answer an inquiry about availability depending on your whereabouts, previous commitments and, most importantly, the relation/distance of the enquirer. The answer could range from a simple "No, try again another time" to "He's in Beijing, back next week" to "I'll slot a phone conference in for you at 17:00 GMT+8". With the rise of digital assistants and calendars this flexible response got lost.
The first generation was entirely personal, while contemporary systems will give you "available slots" or (if granted) a full detailed view. They still don't tell you where the other person is (you could use Google Latitude, Foursquare etc. for that) or will be (TripIt might be able to tell you).
Since calendars are no longer accessed only by a single person, a conflict arises: on one hand we like it simple, on the other hand a lot of contextual information is needed to provide automated access at the right density level. Data protection and privacy concerns complicate matters further.
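To make "density level" concrete, here is a toy sketch in Java (all names are invented, this is no real calendar API): the same calendar entry is disclosed at different precision depending on the enquirer's distance to the owner.

```java
public class DensityAwareCalendar {

    // How close is the enquirer to the calendar owner?
    enum Distance { ASSISTANT, COLLEAGUE, EXTERNAL }

    // Same calendar entry, three levels of information density
    static String describe(String location, String subject, Distance who) {
        switch (who) {
            case ASSISTANT:
                return subject + " in " + location;            // full detail
            case COLLEAGUE:
                return "In " + location + ", back next week";  // whereabouts only
            default:
                return "No, try again another time";           // bare minimum
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("Beijing", "Customer workshop", Distance.EXTERNAL));
    }
}
```

The hard part a PA solves effortlessly is, of course, deciding the Distance for each enquirer.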
There are tons of solution attempts around which all fall short of taking information density into account. Some try to offer more than one calendar that you then can share with different people, some use tags, but I have yet to see one that can take an itinerary approach: I'm going on a trip to Orlando (usually in January). This sets timezone and location, but doesn't block time (unless a presence request indicates a different location outside a "reasonable radius"). Then as part of the trip I schedule sessions and meetings (that would block time then).
Short of having my own PA, that's what my calendar should be able to:
  • All the basic functions calendars have today: entries with and without people, repeating entries, reminders etc.
  • Hierarchical entries (the itinerary approach mentioned above)
  • Ability to switch into different timezones without altering the system timezone. Offer a shortcut based on where I am or will be in the day/week I'm looking at
  • Some clever mechanism to qualify entries, so enquiries (free time lookup etc.) can return more or less information based on the enquirer (that one is really hard). Why can't a freetime lookup include "I need a specific location", "Online" or "Phone" as qualifiers? This includes what goes into my "public" calendar
  • A mechanism to figure out "What is the best option of the following given slots for the group of attendees" (probably online interactive)
  • The ability to track lead times (if I'm in the office and have a customer meeting at their place, I want the travel time blocked and eventually adjusted to traffic conditions)
  • The ability to plan preparation times when planning a meeting (that's a tricky one too) - so I can more efficiently plan time
  • Configurable meta data, so I can tie related calendar entries to customers, projects, goals etc.
  • Feature to drag task execution on and off the calendar - good for planning longer work (a task can have more than one calendar entry)
  • Ability to see public calendars on/off in my calendar in groups. Currently I need to switch them on/off one by one
  • more stuff I will think of, when working with the calendar again
Of course, your style would be completely different, so my wishlist wouldn't fit yours. Would it?


Some things don't change - what makes a good web application

Today we have mobile, jQuery, Dojo, Meteor, HTML5, CommonJS, NodeJS, XPages and many other tools, however the fundamentals for a good web application I worked out 12 years ago are still the same:
What makes a good web application
We just need to keep them in mind.


OAuth, HTTP and file size limitations

In the brave new world of social file sharing, HTTP(s) has won. From the humble WebDAV specification to SharePoint, IBM Connections, Dropbox, UbuntuOne or the emerging industry standard CMIS, all use HTTP(s) to access files on the backends. Since HTTP(s) is the first thing that is available when a network connection is possible, and quite often (especially in public hotspots) the only thing available, this success isn't surprising.
The more venerable protocols like CIFS (a.k.a. SMB), NFS or SSHFS didn't stand a chance since (rightly?) security experts block them on the corporate firewalls to prevent data leakages.
A lot of times the HTTP integration uses basic authentication, which is hazardous on HTTP, but OK on HTTPs. However providing applications with username and password makes it an update nightmare. Therefore OAuth became rapidly popular. But every fix for a problem comes with its own challenges. The challenge here is OAuth session expiry. While this is hardly an issue when getting your latest tweets (140 characters transmit in less than 30 sec even if you only have 10 Byte/sec), it is an issue for large files.
An open bug in UbuntuOne explains it nicely: "OAuth headers used to check the validity of the request contain the timestamp of the request to prevent replay attacks .... for requests taking less than 15 minutes (the default for oauth in updown)." If 15 min is the default, you need a lot of bandwidth depending on your file size:
  • 9 kb/sec for 1 MB
  • 217 kb/sec for 25MB
  • 870 kb/sec for 100 MB
(That's effective bandwidth, not the advertised one.) Of course you don't want to wait 15 minutes for a file, so your real bandwidth requirement might actually be much higher. And that's also the reason why online access to file sharing is nothing more than a band-aid; sync is the way to go.
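The figures above are simple arithmetic: file size converted to kilobits, divided by the 15-minute window. A quick sketch to reproduce them (the unit mix - decimal megabytes, binary kilobits - is my assumption to match the listed numbers):

```java
public class BandwidthCheck {

    // Minimum sustained bandwidth (in kbit/s) to move a file of
    // fileSizeMb megabytes within windowMinutes.
    // Unit assumption matching the figures above: decimal megabytes
    // (1 MB = 1,000,000 bytes) and binary kilobits (1 kbit = 1024 bit).
    static double minKbitPerSec(double fileSizeMb, int windowMinutes) {
        double bits = fileSizeMb * 1_000_000 * 8;
        return bits / 1024 / (windowMinutes * 60);
    }

    public static void main(String[] args) {
        System.out.printf("1 MB in 15 min needs %.0f kbit/s%n", minKbitPerSec(1, 15));
        System.out.printf("25 MB in 15 min needs %.0f kbit/s%n", minKbitPerSec(25, 15));
        System.out.printf("100 MB in 15 min needs %.0f kbit/s%n", minKbitPerSec(100, 15));
    }
}
```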


How much abstraction is healthy for a schema/data model? - Part 2

In Part 1 I discussed Elements vs. Attributes and the document nature of business vs. the table nature of RDBMS. In this installment I'd like to shed some light on abstraction levels.
I'll be using a more interesting example than CRM: a court/case management system. When I was at law school one of my professors asked me to look out of the window and tell him what I saw. So I replied: "Cars and roads, a park with trees, buildings and people entering and leaving them and so on". "Wrong!" he replied, "you see subjects and objects".
From a legal view you can classify everything like that: subjects are actors on rights, while objects are attached to rights.
Interestingly in object oriented languages like Java or C# you find a similar "final" abstraction where everything is a object that can be acted upon by calling its methods.
In data modeling the challenge is to find the right level of abstraction: too low and you duplicate information, too high and the system becomes hard to grasp and maintain.
Let's look at some examples. In a court you might be able to file a civil, criminal, administrative or inheritance case. Each filing consists of a number of documents. So when collecting the paperwork during your contextual enquiry you end up with draft 1:
    <civilcase id="ci-123"> ... </civilcase>
    <criminalcase id="cr-123"> ... </criminalcase>
    <admincase id="ad-123"> ... </admincase>
    <inheritcase id="in-123"> ... </inheritcase>
(I'll talk about the inner elements later.) The content will most likely be very similar, with plaintiff and defendant and the representing lawyers etc. So you end up writing a lot of duplicate definitions. And you need to add a completely new definition (and update your software) when the court adds "trade disputes" and, after the V landed, "alien matters" to its jurisdiction.
Of course keeping the definitions separate has the advantage that you can be much more prescriptive. E.g. in a criminal case you could have an element "maximum-penalty" while in a civil case you would use "damages-sought". This makes data modeling as much a science as an art.
To confuse matters more for the beginner: you can mix schemata, so you can mix the specialised information into a more generalised base schema. IBM uses this approach for IBM Connections, where the general base schema is ATOM and missing elements and attributes are mixed in from a Connections specific schema.
You find a similar approach in MS SharePoint, where a SharePoint payload is wrapped into 2 layers of open standards: ATOM and OData (to become proprietary at the very end).
When we abstract the case schema we would probably use something like:
   <case id="ci-123" type="civil"> ... </case>
   <case id="cr-123" type="criminal"> ... </case>
   <case id="ad-123" type="admin"> ... </case>
   <case id="in-123" type="inherit"> ... </case>
A little "fallacy" here: in the id field the case type is duplicated. While this is not in conformance with "the pure teachings", it is a practical compromise. In real life the case ID will be used as an isolated identifier "outside" of IT. Typically we find encoded information like year, type, running number, chamber etc.
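A minimal Python sketch of treating such an ID as an encoded identifier. The two-letter type codes and the format are assumptions taken from the draft above, not any real court's numbering scheme:

```python
import re

# Hypothetical case ID format: two-letter type code, dash, running number,
# e.g. "ci-123". Real court IDs often also encode year and chamber.
CASE_TYPES = {"ci": "civil", "cr": "criminal", "ad": "admin", "in": "inherit"}

def parse_case_id(case_id: str) -> dict:
    """Split a case ID into its encoded parts and resolve the type code."""
    match = re.fullmatch(r"([a-z]{2})-(\d+)", case_id)
    if not match:
        raise ValueError(f"unrecognised case id: {case_id}")
    code, number = match.groups()
    return {"type": CASE_TYPES[code], "number": int(number)}

print(parse_case_id("ci-123"))  # {'type': 'civil', 'number': 123}
```

The moment such parsing code exists, the duplication is no longer "free": the id format and the type attribute must be kept in sync.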
One could argue that a case is just a specific document and push for further abstraction. Also any information inside could be expressed as an abstract item:
<document type="case" subtype="civil" id="ci-123">
    <content name="plaintiff" type="person">Peter Pan</content>
    <content name="defendant" type="person">Captain Hook</content>
</document>
Looks familiar? Presuming you could have more than one plaintiff you could write:
<document form="civilcase">
    <noteinfo unid="AA12469B4BFC2099852567AE0055123F" />
    <item name="plaintiff">
        <text>Peter Pan</text>
    </item>
    <item name="defendant">
        <text>Captain Hook</text>
    </item>
</document>
Yep - good ol' DXL! While this is a good format for a generalised information management system, it is IMHO too abstract for your use case. When you create forms and views, you actually demonstrate the intent to specialise. The beauty here: the general format of your persistence layer won't get in the way when you modify your application layer.
Of course this flexibility requires a little more care to make your application easy to understand for the next developer. Back to our example, time to peek inside. How should the content be structured there?


How much abstraction is healthy for a schema/data model? - Part 1

When engaged in the art of data modelling everybody faces the challenge of finding the right level of abstraction. I find that challenge quite intriguing. Nobody would create attributes "Baker", "Lawyer", "Farmer" in a CRM system today, but rather one attribute "profession" that can hold any of these professions as its value. A higher level of abstraction would be to have attribute/value pairs. So instead of "Profession" - "Baker" it would be "Attribute: Name=Profession, Value=Baker". Such constructs have the advantage of being very flexible: without changing the schema, all sorts of different attributes can be captured. However they make validation more difficult: are all mandatory attributes present, are only allowed attributes present, and do all attributes have values in the prescribed range?
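The trade-off can be illustrated in a few lines of Python. The schema rules here (a mandatory "profession" with a fixed value list) are invented for illustration, not taken from any real CRM product:

```python
# A minimal sketch of validating generic name/value attribute pairs
# against a rule set - the flexibility of attribute/value storage
# pushes all the checking work into code like this.
SCHEMA = {
    "profession": {"required": True, "allowed": {"Baker", "Lawyer", "Farmer"}},
    "city": {"required": False, "allowed": None},  # free text
}

def validate(attributes: dict) -> list:
    """Return a list of validation problems; empty means the record is valid."""
    problems = []
    for name, rule in SCHEMA.items():
        if rule["required"] and name not in attributes:
            problems.append(f"missing mandatory attribute: {name}")
    for name, value in attributes.items():
        if name not in SCHEMA:
            problems.append(f"attribute not allowed: {name}")
        elif SCHEMA[name]["allowed"] and value not in SCHEMA[name]["allowed"]:
            problems.append(f"value out of range for {name}: {value}")
    return problems

print(validate({"profession": "Baker"}))   # []
print(validate({"shoe_size": "42"}))       # two problems
```

With a fixed-column schema the database would enforce most of this for free; with attribute/value pairs, every rule lives in application code.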
Very often data models are designed around the planned storage in an RDBMS. This conveniently overlooks that data modelling knows more approaches than just a physical data model and ER diagrams. Tabular data in real life is confined to accountants' ledgers, while most of the rest are objects, documents and subjects (people, legal entities, automated systems - data actors so to speak) with attributes, components (sub-entries) and linear or hierarchical relations. Also, exchanging data requires complete and intact data, which lends itself rather to the document than the table approach (putting on my flame-proof underwear now).
In an RDBMS the attribute table would be a child table to the (people) master table, with the special challenge of finding a unique key that survives an export/import operation.
This is the reason why an XML Schema seems to be the reasonable starting point to model the master data model for your application. Thus a worthwhile skill is to master XML Schema. (It also helps to have a good schema editor; I have been using oXygen XML for many years.)
This won't stop you from still using an RDBMS to persist (and normalise) data, but the ER schema wouldn't take centre stage anymore. Of course modern or fancy or venerable databases can deal with the document-tree nature of XML quite well; I fancy DB2's pureXML capabilities quite a bit. But back to XML Schema (similar considerations apply to JSON, which is still lacking a schema language - work is ongoing).
Since XML knows elements (the stuff in opening and closing brackets that can be nested into each other) and attributes (name/value pairs living inside the brackets) there are many variations to model a logical data entry. A few rules (fully documented in the official XML specifications) need to be kept in mind:
  • Element names can't contain fancy characters or spaces
  • Elements can, but don't need to have content
  • Elements can have other elements, text or CDATA (for fancy content) as children
  • Elements can, but don't need to have attributes
  • Element names can't start with "xml"
  • Attribute names can't contain fancy characters or spaces
  • Attribute values can't contain fancy characters. If present, they need to be encoded
  • Attributes should only exist once in an element
  • Attributes must have a value (it can be empty), but can't have children
There are different "design camps" out there. Some dislike the use of attributes altogether, others try to minimize the use of elements, but as usual the practicality lies in the middle. So we have 2 dimensions to look at: element/attribute use in a tree structure and, secondly, the level of abstraction. Let's look at some samples (and their query expressions):
    <name>John Doe</name>
    <remarks>John loves roses and usually speaks to Jenny</remarks>
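How the chosen abstraction level changes the query expressions can be seen in a small Python sketch, contrasting the specialised and the generalised variant of the court-case example from Part 2 using the XPath subset that ships in the standard library:

```python
import xml.etree.ElementTree as ET

# Two ways to model the same case data; both can be queried with the
# limited XPath subset that ElementTree supports.
specialised = ET.fromstring(
    '<civilcase id="ci-123"><plaintiff>Peter Pan</plaintiff></civilcase>')
generic = ET.fromstring(
    '<document type="case" subtype="civil" id="ci-123">'
    '<content name="plaintiff" type="person">Peter Pan</content>'
    '</document>')

# Specialised schema: the element name itself carries the meaning
print(specialised.find("plaintiff").text)               # Peter Pan

# Generic schema: the meaning moved into an attribute predicate
print(generic.find("content[@name='plaintiff']").text)  # Peter Pan
```

The more abstract the schema, the more the query has to spell out what the element name used to say.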


The Vulcan UI vision

By now everybody in the little yellow bubble should have seen the Project Vulcan Demo:

But how would you explain it to a non-technical user? I use the following little story:
Just imagine one of your favorite artists performs at The Esplanade and you invested in some premium tickets. When you arrive an usher doesn't ask you for your ticket class, but asks: "How did you come here?" When you look astonished, he continues: "If you came by car, please use entry A. If you came by bus, entry B is for you. Taxi passengers use entry C, while pedestrians please use entry D" - and these are the entries to the concert hall, not the building.
Silly isn't it? But that's exactly what we do with computers today: "If you arrived via SMTP, please show up in eMail. If you arrived via XMPP, show up in the chat client. If you arrived via RSS/ATOM please surface in the feed reader, etc.". Vulcan will put an end to this. First it will unify the application access and then arrange content by your criteria (from whom, what content, what priority, what project etc.) rather than distribution channel (which will be just one option).
Feel free to expand, remix and reuse this little story (and go buy some tickets)


What trains and servers have in common - bandwidth

Practising deduction and investigation with my kids I raised the question: "Does an MRT station need more entry or exit gates?" (MRT is Singapore's local train/subway operator). So we came up with:
  • People reach the station at their own pace, but arrive together in a train, so at any given time far fewer people want to enter concurrently than want to exit concurrently
  • unless of course the station caters to an event, where visitors leave at the end and want to enter the station
  • Depending on the time of day streams might be reversed. From 8am to 9am everybody wants to leave at the Central Business District stations, while 17:00 onwards they want to enter
  • Depending on the frequency of trains and the gate capacity you need more holding area in the station. If you increase the frequency of the trains too much you won't get enough people through the gates in time to fill them
  • How fast can one individual pass the gate (a.k.a access latency)?
MRT solved that dilemma by having gates whose direction can be switched on demand, and once ticket presentation at a gate (we use contactless tickets) exceeds a certain frequency the gates switch to a "stay open" mode which allows passing through at almost walking speed. In Munich there are no gates - you could even walk right in - but that invites cheating, and the city of Munich has a team of people randomly checking tickets on trains, which always feels awkward.
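The gate arithmetic from the list above can be sketched in a few lines. All figures are assumptions for illustration, not MRT's real numbers:

```python
# Back-of-envelope gate sizing: a train arrives every 120 seconds
# carrying 1200 alighting passengers, and one gate processes a
# passenger every 2 seconds (all numbers assumed for illustration).
train_interval_s = 120
passengers_per_train = 1200
gate_service_time_s = 2

arrival_rate = passengers_per_train / train_interval_s  # passengers/second
gate_rate = 1 / gate_service_time_s                     # passengers/second per gate
gates_needed = arrival_rate / gate_rate

print(f"{gates_needed:.0f} gates needed to clear the platform between trains")
```

Increase the train frequency (shrink `train_interval_s`) and the number of gates needed grows; fail to add them and the holding area has to absorb the difference.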
CPU power growth has exceeded I/O growth
Similar considerations are needed in server design:
  • Like in a train system (where you have train capacity, train frequency, station capacity, gate/platform throughput, access path capacity etc.) a server system has CPU speed, CPU throughput, cache capacity, storage throughput (commonly referred to as storage I/O) and network throughput (using Ethernet and its CSMA/CD protocol, throughput declines with the number of network participants; this is why we have all these expensive switches)
  • Increasing the memory bandwidth by moving from 32 to 64 bit doesn't make a system faster (your car doesn't get faster because your road widens from 2 to 4 lanes), but more able to handle higher demands, e.g. with concurrent access or larger reads. It is like switching from a 2.8t lorry (truck for the American readers) to an 8t truck. Both are (in Germany, by law) limited to a top speed of 80 km/h, but when you move, the latter only needs to make the trip once. Of course, if your system was memory-bus or memory challenged in the first place, 64 bit will help instantly
  • Your CPU gets pretty bored (trains stay empty) if you can't get data in and out fast enough. What I see a lot are blade centers, virtual machine clusters or private clouds that have abundant CPU, so-so network bandwidth and (almost criminally) lacking storage I/O capabilities (not this one). This might relate to past memories where the CPU tended to be the bottleneck (today, when a CPU maxes out, it is often due to I/O wait cycles)
  • "Data in and out" covers both storage and network access. And the question "How much bandwidth does a Domino server need?" still misses the part "for this amount and type of payload and that response-time expectation". If your network bandwidth doesn't scale, you can still change strategy
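A rough check along the lines of the list above - is CPU or storage the bottleneck? All workload figures are made up for illustration:

```python
# Is CPU or storage the bottleneck for a hypothetical workload?
# All numbers are illustrative assumptions, not benchmarks.
requests_per_second = 500
cpu_ms_per_request = 1    # CPU time per request
io_ops_per_request = 4    # disk reads/writes per request
storage_iops = 1500       # what the storage backend can sustain

cpu_utilisation = requests_per_second * cpu_ms_per_request / 1000  # of one core
io_demand = requests_per_second * io_ops_per_request               # IOPS needed

print(f"CPU load: {cpu_utilisation:.0%} of one core")
print(f"I/O demand: {io_demand} IOPS vs {storage_iops} available")
if io_demand > storage_iops:
    print("storage I/O is the bottleneck - the CPU will idle in I/O wait")
```

With these (assumed) numbers the CPU sits half idle while storage is oversubscribed by a third - exactly the "abundant CPU, starved I/O" pattern described above.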
In conclusion: Your first order when planning a server, a virtual environment or a cloud:
Take care of your I/O!

As usual YMMV


From Data to Knowledge

Wonderful infographic found online: data cake
Now where is that recipe?


Backward compatibility - damn if you do, damn if you don't

My blog entry When Software matures you need to cut development to stay profitable, isn't it? stirred more feedback than just the elaborate comments from Nathan and Phil. So there are many more thoughts to share when looking at backward compatibility. In his famous 2004 article How Microsoft lost the API war, Joel Spolsky explained, using Microsoft as an example, how two camps, which he labeled The Raymond Chen Camp and The MSDN Magazine Camp, are at odds with each other regarding backwards compatibility. Go read the entry, I'll wait for you. Joel and Nathan rightly point out that backward compatibility is often taken for granted and is increasingly expensive (Apple worked around that using Rosetta, but from a way smaller base).
So what happens if you don't pay attention to backwards compatibility? Customers will be very reluctant to upgrade, since applications (especially custom applications) need to be upgraded too, and your version upgrades become a major exercise that gets cut once a customer hits rough water. In the end your speed of innovation slows down and you need to support a lot of old versions. What happens if you do? Nathan outlined that already.
So damn if you do, damn if you don't!
As the going joke goes: "God would not have been able to create the world in seven days if there had been an installed base to take care of". The biggest challenge I see is the incredible creativity with which platform APIs are used (and undocumented functions are handed from developer to developer as secret tips - you are lame if your program doesn't use any) and abused. Part of my job is to look at existing applications and I've seen a lot of code that made me cry (and laugh: like the Chinese application where everything was Chinese, including field names and LotusScript variable names - yes, you can do that in LotusScript). So what's the way out? Eventually we need to borrow a page from William Goldman. In his novel The Princess Bride he retells an abridged version of an older story, "just the good parts". So how would that look for software:
  • Make upgrade management part of the software. For Domino it is a joke that updates and patches need to be downloaded and distributed manually. The Linux package managers solved that problem. I would expect a page in Domino admin that allows you to do just that. Eclipse update can update clients automagically, so we could use that
  • Provide analysis tools. Teamstudio has the Upgrade Filters, but they don't go far enough. Well-behaved code usually doesn't pose a problem when upgrading; the code I've seen will (think fragile). For browser JavaScript there is JSLint (its original tag line was "JSLint will hurt your feelings"), for Java there are Lint4J and Crap4J. We need such a tool for LotusScript, so overuse of client events, unencapsulated OS calls and missing Option Declare statements don't go unpunished anymore (because you can do something doesn't mean you should. Don't believe me? Try to poke yourself in the eye - you can do that). So where are the LSLint and the @Lint?
  • An upgrade pilot would show functionality that is deprecated in this version and will be gone in the next. It would make a nice RTC plug-in.
  • A data conversion sandbox. If data in an older format is found, it is put into the sandbox and can only graduate after an update
  • ... insert more witty ideas here ...
I'd really like to see the lints.


When Software matures you need to cut development to stay profitable, isn't it?

Besides having Fun with Dueck I discover more and more how prevalent system patterns are in business. My special foes are "Quick Win|Fix|Start", which are strong indicators of a Shifting-the-Burden archetype at work (remember the sales cycle). Software is a very profitable business that scales very well (your marginal cost to create another sellable software license is practically zero - don't confuse that with "cost of sales"). Nevertheless it is also the playground for the burden-shifting pattern. Capitalist theory demands that high profit margins attract competitors, thus reducing the price a vendor can command until the profit margins aren't higher than in the general economy. Of course market incumbents try to raise entry barriers to prevent such competition (and when they overdo that, they get investigated, sued, convicted and fined). So the high profit margins require constant attention, since the competition is closer than it appears. But what should a company do when competing in a mature market? Ask your average MBA: cut cost of course. So do very successful businesses do that? Like the insurance industry in Singapore (which manages to sell more insurance per household than anywhere else)? Nope: no cut in staff training, no cut in the work force, but more incentives. What happens when costs are cut to fix short-term profitability?
Shifting the Development Burden kills your product
Cutting back on R&D will improve the bottom line when the cut is made. But it will also slow down product improvements. The slowdown not only stems from a reduced team size, but also from teams becoming preoccupied with their own survival (will I be the next to be cut?). Once basic security is gone, the readiness for disruptive innovation disappears, making the vendor even more vulnerable to its competitors' assaults. As a result the pain of the symptom overshadows the root cause and more cuts are made. Go and visit the ERP Graveyard to get an impression for just one software category. Especially in a time of sliding revenues, taking a controlled risk could revitalise your product, unless you are chronically risk averse. And don't rely on a customer council (you get faster horses then) but on your ability to innovate. Unfortunately the cost cutting just removed the necessary funds and, more importantly, the personal security that leads to innovation.


Taking hints from each other

In fashion there is no copyright, which seems not to be an issue in that industry. They make good money:
Gross Sales Of Goods by IP protection
In software even the look and feel is jealously guarded by all parties involved even if it wasn't their original design. However there is only so much you can do and elements start looking similar. Black top bars in web applications seem to be the latest trend.
Taking Hints From Each Other
Now taking hints from each other is a big no-no in hardware design, unless of course your hardware happens to be a car, a suit or some furniture. Should we take some more hints from Johanna Blakley?


How eMail encryption works and why it is an utter public failure

You have a message for someone that is both confidential and sensitive. You want to ensure that only the recipient can open it and that (s)he also gets reassurance that it actually was you who sent it and that the message has not been tampered with. Enter the world of digital encryption and digital signatures.
Notes users are used to just checking the respective check box, and their messages get signed and encrypted without any further action required. When sending messages to external parties this is usually accompanied by error/warning messages that this won't work. So is it a fault in Lotus Notes (usual business practice: if anything doesn't work, first blame Notes)? Far from it. To be able to send an encrypted message one needs access to some information about the recipients which today is neither publicly available nor easy to obtain. Let's have a look at how exactly signatures and encryption work.
Both depend on the availability of a key pair to operate. So the first step is the creation of a public/private key pair by a certificate authority (CA). For other platforms setting up a CA is quite a task; you can check some public sources and explanations for more details on that. In Lotus Notes that key pair is created by the Domino administrator when registering new users (using Domino's CA process, and kept safe in Domino's ID Vault). Public/private key pairs are very interesting constructs. Using advanced mathematics two strings are created with some interesting properties:
  • You can't (at least not within a reasonable amount of computing time) compute one of the keys from the other
  • When you encrypt something with your private key anybody holding the public key can decrypt and read it. This is useful to check if something really came from you without alterations
  • When someone encrypts something with your public key only you (the owner of the private key) can decrypt and read it. This is useful to transmit confidential information
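These two properties can be demonstrated with textbook RSA and deliberately tiny numbers (never use such key sizes for real; this is illustration only):

```python
# Toy textbook-RSA key pair: n = p*q with p=61, q=53.
# e is the public exponent, d the private one.
n, e, d = 3233, 17, 2753

def transform(m, key):
    """Apply the RSA operation (same math for encrypt, decrypt, sign, verify)."""
    return pow(m, key, n)

message = 65

# Property 1: encrypt with the public key - only the private key decrypts
ciphertext = transform(message, e)
assert transform(ciphertext, d) == message

# Property 2: "encrypt" (sign) with the private key - anyone holding the
# public key can recover it, proving the private-key holder produced it
signature = transform(message, d)
assert transform(signature, e) == message

print("both round trips recovered", message)
```

The asymmetry is the whole point: one direction gives confidentiality, the other gives authenticity.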
Creation of a public/private Key pair
So in an ideal world anybody's public key would be accessible to anybody. In Notes the public key is saved into the Domino Directory on creation of the ID file, so it is convenient to use. In public, the big certificate authorities like Thawte or VeriSign offer query possibilities for their databases. Unfortunately there is no prevalent standard, nor do any of the big email programs or services offer easy access to all of this (I happily stand corrected). LDAP could be used for the task, but I have yet to see a public service there. I would expect something like a security service that knows where to look for keys and imports them as needed. Fiddling with PKCS files is out of the question. The complexity and inconvenience of managing your keys is IMHO one of the big failures of the IT industry. So millions of sensitive messages go through the internet unprotected every single day (I would sniff out eMails from/to private banks for starters). How does it exactly work?
  1. Encrypting:
    An encrypted message can only be read by the intended recipient
    Steps to encrypt a message
    When the sender wants to encrypt a message the email client requests the public key. That key can either be stored in the user's address book or be provided by the directory service. In a corporate environment the availability of a public key is the default for Lotus Notes; other email systems need extra configuration and key-generation work. Once the key is retrieved the message is encrypted with it and can only be decrypted with the private key of the recipient. Once the recipient receives the message, her private key (stored in the Notes ID file for Lotus Notes or the keystore for other applications) decrypts the message and displays it. eMail encryption only encrypts the body of the message (including attachments), but not the subject line or the from/to fields.
  2. Electronic Signature
    An electronic signature verifies the authenticity of the message content. The message itself is not encrypted when signed. Signatures can be used together with encryption. Used together signature comes first.
    Steps to sign a message
    The signing process computes a check sum (e.g. an MD5 hash value) from the body of the message. The resulting string gets encrypted with the private key of the sender. The recipient executes the same check sum computation and then uses the public key of the sender to decrypt the original result. When the two values match, the recipient has confirmation that the specific message really came from that sender without any alterations
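The hash-then-sign flow above can be sketched in Python, reusing toy textbook-RSA numbers (tiny key, illustration only; real implementations use proper key sizes and padding):

```python
import hashlib

# Toy textbook-RSA key: n, e public; d private (illustrative tiny numbers)
n, e, d = 3233, 17, 2753
body = b"Dear Captain Hook, see you in court. Peter"

# Sender: hash the body, then encrypt the digest with the PRIVATE key.
# (Reduced mod n only because the toy key is far smaller than an MD5 digest.)
digest = int.from_bytes(hashlib.md5(body).digest(), "big") % n
signature = pow(digest, d, n)

# Recipient: recompute the digest and decrypt the signature with the
# sender's PUBLIC key - matching values prove origin and integrity.
check = int.from_bytes(hashlib.md5(body).digest(), "big") % n
print("signature valid:", pow(signature, e, n) == check)  # signature valid: True
```

Any change to the body changes the recomputed digest, so a tampered message no longer matches the decrypted signature.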
What would need to happen so that encryption becomes available more easily? (Why that doesn't happen would make a field day for interested parties):
  • Addition of a new record type to DNS. It could be an ENC (for encryption) record type or a convention to use a TXT record like the SPF framework does. It would point to one or more LDAP servers that can provide public keys for the domain of a recipient
  • The naming standard for LDAP's X509 attributes gets implemented in an interoperable way
  • eMail clients and eMail services would use the new DNS and LDAP entries to retrieve public keys when encryption is requested by a user or a signature needs to be verified. Of course some caching and deferred-operation capabilities need to be built into the clients
  • GMail, Yahoo mail and the big others offer certificates to users
  • The other corporate eMail servers implement less painful certificate management
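Such a DNS convention could look like the following hypothetical zone-file fragment. The "v=keydir1" syntax and the record content are invented for illustration; no such standard exists:

```text
; Hypothetical TXT record pointing at a public-key directory,
; mirroring how SPF started out as a TXT-record convention
example.com.   IN TXT   "v=keydir1 ldap=ldap.example.com port=636"
```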
I doubt we see any of this anytime soon. After all the spooks don't want to spend all their CPU horsepower on regular citizens' eMail.


It isn't a standard if it isn't broken. Today: webDAV

In my last life I worked on a Tomcat servlet that allowed access to Domino resources using webDAV. With XPages and the arrival of a decent J2EE stack on Domino I saw a great opportunity to fold that servlet into an XPages extension. This proved to be rather easy. One still extends javax.servlet.http.HttpServlet and adds a plugin.xml that can be as simple as:
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension point="org.eclipse.equinox.http.registry.servlets">
         <servlet alias="/files" class="" />
   </extension>
</plugin>
Why webDAV and not something more modern like CMIS? Simple answer: lack of file system support for the latter. CMIS requires a plug-in on a client machine to make a CMIS server look like a file system, and I'm not aware of any file system plug-ins available yet. Libraries or applications, yes, but nothing is visible on the file system. webDAV on the other hand has been supported on Mac, Linux, Unix and Windows for a long time. So I thought. Having mostly Mac and Linux at home, my tests looked promising. However Microsoft broke the webDAV redirector and the web folders in Windows. This is quite surprising since webDAV was/is in use in both Microsoft Exchange as well as Microsoft Sharepoint (via IIS), though not without trouble. While poking around I learned a thing or two:
  • webDAV works as designed on Linux and Mac. It needs special care on Windows
  • For Vista and XP you need to install a patch provided by Microsoft
  • There is a nice summary of how to connect, from a vendor who also makes tools to get the Windows webDAV experience up to par with the Linux or Mac functionality
  • Windows XP has trouble connecting to anything but port 80, and you still want to include :80 in the connection URL. SSL without the patch won't work
  • Windows 7 tries to use digest authentication by default. Big issue for Domino unless you use Puakma SSO or PistolStar
  • Basic authentication is off by default, but might be on for SSL if you have Sharepoint installed. Using a DWORD registry key you can change the behaviour (0=off, 1=on for https, 2=on for http/https): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel for Windows 7 and Vista (Vista still might screw it up). For Windows XP and Windows Server 2003 use HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\UseBasicAuth
  • Explorer tries to read the desktop.ini file in every folder. So you either end up with a lot of 404 errors in your log, or you provide one for your folders, which you can customise
As usual YMMV


Singapore Airlines needs a little QA on their website

I love my national airline. The planes are modern, the staff friendly and competent, the flights on schedule. I can't say the same about the online experience. The recent revamp of their website struck me as odd, and it took a while before I could pinpoint what irked me:
3 different font styles
While de gustibus non est disputandum, mixing 3 font styles just in the header of a page makes it look odd. Also, having entry fields styled in an italic serif font violates every web convention and makes reading the content, especially on small devices, unnecessarily hard. But well... I could always override the style sheet. That wasn't the only problem, though. I tried to update my profile. My street address contains a # sign. This is quite prevalent in Singapore, since addresses are written as Block/Building #Floor-Unit number, and this was what I had in my profile. But when I updated a different item, I got the error that # isn't allowed in the address: existing data suddenly became invalid. The next issue I encountered was this oh-so-useful error message:
Error messages should be clear
"The written language is not valid". Which of the two? And it was a selection from a dropdown, why does the form offer invalid entries? What shall I do?
So I dutifully filled in a feedback form to get insult added to injury in the auto-reply:
"This is an automated acknowledgment to inform that we are experiencing high feedback volumes related to the launch of our new website. We apologise that we are not able to respond to queries or feedback related to our new website at this stage.
For internet check-in issues, we request that you try again later. Online check-in is available up to 2 hours before your flight's departure. Alternatively, you may also check in using your mobile device or proceed directly to check in at the airport. Please click the following link to access SIA Mobile:
There seems to be some quality gap between IT and flight operations.


Going social below the radar - will you come along?

This blog entry is based on my colleague Benedikt Müller's German Blog entry "Enterprise Social Software mit Guerilla-Taktik". It is a mix of translation, transcript and reflection. Here you go:

A few weeks ago Novell released their new collaboration platform Vibe Cloud. Vibe Cloud is the phoenix from the ashes of Google Wave in an enterprise flavour. This cloud service claims to improve collaboration inside enterprises without the need to invest in on-premise or hosted infrastructure of one's own. To drive adoption, Novell uses the same interesting approach that turned into a remarkable success story for Yammer, the innovative corporate micro-blogging service (I always cringe at the term "micro blog"; it is like "adhesive tape": anybody would just say "tweet", the way they ask for Scotch tape, or Tesa film in Germany): Yammer found their audience by creating closed networks based on the subscribers' eMail domain. Anybody can register on their website and is added to a network containing all users with the same email domain. The first user of any given domain kickstarts the network. Click on register now and the corporate social network takes flight.

Yammer and now Novell Vibe Cloud empower individual employees to introduce these services bypassing any internal processes and approvals. From the vendor's point of view that constitutes an ingenious route to market. Employees love it for simplicity and speed. For management and the IT departments, however, this is a nightmare come true: loss of control and escalation of risk (not that I'm alleging that there are control freaks running IT or management, judge for yourself). How would such a stealth introduction unfold? Here's one typical sequence of events:

Frustrated by being limited to eMail as the single collaboration tool in their corporations, employees start to search for alternate approaches to improve collaboration with their peers. They are empowered by their private experience with social software on the internet: Facebook and Twitter keep them up to date with their social sphere, file sharing is a snap using Dropbox or Ubuntu One, and they update shared documents in Google Docs. These applications set the benchmark any corporate solution will be measured against by users. Once they discover similar tools tailored for corporate use, which on top can be used by simply providing their eMail address, the flood gates are open for a rapid, uncontrolled (and potentially undiscovered, until the CEO enters his eMail address out of curiosity) proliferation inside the organisation. What happens next, after all it is corporate use, is the storage of internal and confidential information on the servers of these services: project discussions, customer-related documents, draft presentations etc. are stored, shared and worked on.

After a short period of time a lot of internal corporate data gets stored with a vendor that hasn't been evaluated by the IT department and (IMHO much more critically) who has no contract, and thus no contractual obligation, with the corporation. Once the user base has grown sufficiently large, management and IT can't block or discontinue the service without risking being confronted with torches and pitchforks. In such cases a company is forced to upgrade to the commercial, paid-for service variation to gain access to control and security functionality (anyway: water flows downhill and simply finds another way).

Capgemini adopted Yammer in exactly the sequence described above. Once the accelerating proliferation had been recognised, Capgemini decided to tolerate the new communication channel. Benedikt stated his support for this move, since stemming against the dynamics of this movement would have proven too difficult. I haven't made up my mind, but I do agree with Benedikt that communication dynamics need to be taken advantage of, moderated and empowered; trying to stem or suppress them won't work. Since the data is stored outside the corporation, the sharing of customer-related or internal information on Yammer has been outlawed for Capgemini. This restriction severely limits the usefulness: one can't share project-related information or even which customer one is currently with.

Benedikt draws two conclusions (mine follow thereafter):
  • Corporations need to take their employees' needs and wants regarding modern communication and collaboration seriously. Otherwise there is the risk (I would say: the certainty) that staff simply utilise consumer tools like Google Docs and Dropbox, or "fly below the radar" introducing unchecked services like Yammer or Novell Vibe Cloud
  • To create real value in corporations, social software must encompass collaboration using internal, confidential or even secret information. That works reliably only with the cloud offering of a trusted partner. In larger organisations, however, the preferred approach still seems to be making these services available on-premises, leveraging their existing data centre
  • The need for communication and collaboration will always trump aspirations to control and prevent. Social software is happening now; it is up to management to decide how much guidance and influence they want to exercise
  • Simplicity isn't simple. The pervasiveness of eMail is (besides the fiction of ownership - MY inbox) rooted in its universality. I have one place to communicate internally and externally. If suddenly communication affords different tools for different channels (like Twitter to the outside, Yammer to the inside), adoption is impacted. Of course you could use Wildfire in your Lotus Notes 8.5 sidebar as a single update location, the same sidebar that hosts IBM Activities, which you can share inside and outside your organisation
  • Cross-corporate collaboration hasn't been sorted out yet. LotusLive's guest mode or IBM's public Sametime servers are a start, but compared to eMail it is all still in its infancy


eMail migration - what do you do with the legacy?

Where do you put your old eMails? Setting up an eMail server or signing up for a cloud based offering is straightforward. Mastering the trade takes a little longer. However, moving to a new platform from wherever you are is not so crystal clear. There have been various studies about migration cost; one of them puts the budget per user for eMail migration in the range of USD 200 (plus opportunity cost for training and lost productivity). A big slice of that cost is for moving historical eMails across to the new platform. There are a number of approaches to deal with the old eMails:
  • Go for amnesia and leave the past behind. It worked for the White House, it could work for you. Biggest drawback: you are in breach of several binding regulations and others can take you for a ride (an eMail always has at least two ends). Advantage: clear mind
  • Retain the previous eMail client to look up historic records. It is only temporary, since most jurisdictions allow business records to be deleted after 5-7 years (and a VM image can preserve that old OS too)
  • Print all the records you want to keep. "Print" would include the creation of PDF files. Be clear about the fact that you will most likely be in breach of various electronic transaction acts if you only take the default printout, which omits the header (transmission) information. So you might print items you want to keep only after a retention mandate has expired
  • Export eMails into a vendor neutral format (that would be MIME). You need a good way to put these files (one per eMail) into useful storage (the same problem applies to the PDF files). A file system might not qualify as such
  • Use an eMail platform neutral Enterprise archive to keep all your messages (works well with smart eMail life cycle management). The clear advantage here: the enterprise archive is a necessity regardless of your eMail strategy (stay where you are or depart to greener pastures) and can archive files as well. Usually it is a big ticket item and your storage vendor will love you
  • Finally: migrate your data to the new email platform
The big question here: is archival a user or an admin responsibility? And what does your legal counsel say about eMail retention laws (keep in mind: based on its content, an eMail can also qualify as a business record with its own set of legal constraints)?
As usual YMMV


The Network vs. the Tree

When I started to use Lotus Notes in version 2.1 (thanks to these guys) my primary interest wasn't to learn a new technology (I consider learning new technologies as icing on the cake), but to find a suitable tool to manage semi structured information. At that time computers mostly dealt with structured data or individual storage for pre-print artifacts (today known as Office documents). My main interests were and still are Knowledge Management (KM) and eLearning which IMHO are just different stages of the same thing: acquisition, provision and retention of capabilities.
The trickiest problem in KM, and to a large extent in eLearning, is the classification of items. Taking a hint from classical science, the first approach was to use a taxonomy to build a classification tree. Classification trees are well established and deeply entrenched in corporate hierarchies: a human is a hominid is a primate is a mammal is a vertebrate is an animal from the realm of living beings. Tom is an engineer who works for Frank who is a team leader who reports to Sue who is a development manager working for Cloe who is head of development reporting to Steve who is CTO reporting to Annabel who is CIO reporting to CEO Jack and the board. Somehow it didn't work. The going joke is: "If you want to get rid of job competitors internally, make sure they sit in the Taxonomy committee; that will tie them up and wear them down." Truth is: not everything fits into a hierarchy, and agreeing on a term as the single permissible label for an item is a pipe dream (and what you would have to smoke in that pipe would be illegal in most jurisdictions). Especially with the rise of "PC", a committee might come to the compromise to call something "a human muscular traditional digging device" while mentally sane people will insist to "call a spade a spade".
The rise of social computing with sites like Delicious or Digg added a new quality to classification attempts: tagging. With tagging, naming something was suddenly given to all individual users rather than the "Committee of the final truth". Moreover, items can be classified in any way thinkable, and spade and classical digging device can coexist. By counting how often a tag has been associated with an item, the "majority vote" or "common name" can be established without ditching the minority opinions. While it sounds messy, it works well in practice and is rapidly being adopted in corporate social software.
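This "majority vote without ditching the minority" counting can be sketched in a few lines of Python (the tag list is made up for illustration):

```python
from collections import Counter

# Each user tags the item independently; minority labels are kept, not discarded.
tags_for_item = ["spade", "spade", "digging device", "spade", "shovel"]

counts = Counter(tags_for_item)
common_name = counts.most_common(1)[0][0]  # the "majority vote"

print(common_name)   # spade
print(dict(counts))  # every minority opinion survives alongside it
```

The committee compromise and the plain-spoken term coexist; popularity just ranks them.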
Besides classification by gravity, tagging added additional meta information: when was it added, who added it, how popular is it. Especially the "who" seems to be an important factor. With the constant overload of information it becomes increasingly hard to check all the facts, so the trustworthiness of the source, even if it provides just the classification, becomes more and more important. So every tag associated with an item is in fact a linking vector with all these attributes, the tag value being just one of them. Ironically, predating the tagging breakthrough, we already had a standard to do exactly that: XLink.
Unfortunately no gain comes without collateral damage, and flat tagging got rid of marking "the official term" as well as of the context covered by a taxonomy. When you see a tag "bank", what does it mean? A turning manoeuvre of a plane, the edge of a river, an expression of trust (I bank on you) or a form of financial institution?
Delicious took an interesting approach by forming hierarchies out of the tags provided, which leads to a huge number of permutations when the tag count increases - and not all make sense. Of course the question is: do the nonsensical ones matter, since no one will ever follow them? Recognising that the core value of a tag lies in its links led to tools like The Brain, which allow you to link facts by simply dragging a line or pressing a button. The tag becomes a member of the information repository in its own right. Unfortunately the links don't carry the information about "why" they exist ("is a", "contains", "runs", "owns" etc.). It will be interesting to see how The Brain will adapt to collaborative linking needs.
The concept of trust was further developed by features like the Facebook like button or the voting capabilities in sites like StackOverflow or IdeaJam. It all reminds me of the ancient Germanic court room principle of proving plaintiff's trustworthiness rather than looking at facts. There are services that want to help to establish trustworthiness for URLs. All these attempts of classification have their merits, what is lacking is a unified field theory for classifications.
Spheres of Influence
How to weigh expert classifications (there is usually more than one expert, e.g. check for that really dangerous Dihydrogen monoxide), especially when they are unpopular, against public opinion? How to quantify trust in your social graph (you would blindly follow Joe's music recommendations, but never ever let him near a kitchen to make food)?
So KM practitioners around the world have much to muse about. The key questions are still open: how to provide accurate, current, relevant and accessible know-how.


LotusLive Symphony, Google Docs, Office 365

In school my gentlemen are supposed to use Google Docs, I'm testing LotusLive Symphony at work and Microsoft just started Office 365. I will not (at least for now) compare features and functionality here. So all the large vendors seem to believe that "Cloud Office" is ready for prime time. It certainly highlights some trends and changes in our work culture. Here are my observations:
  • The browser as primary interface is dominating (this is NOT about where the data comes from). While some cling to specialised eMail clients, the reality is HTML (I would still make the case for Lotus Expeditor, but mostly to join HTML interface snippets that come from different places). Spolsky was right about The API War (and that was 7 years ago)
  • We hardly write documents for print anymore (the only thing I print are routing slips for travel claims), so the notion of a page becomes less and less important
  • The online editors take a "good enough" feature set approach, which seems good enough for me (YMMV). Anyway, my future favourite HTML based editors have different uses and purposes
  • They solve a very old problem that was first highlighted in About Face (the first book, out of print for a long time): users don't want to bother about saving and locations. A separate comparison would be needed to see how the contestants fare here.
  • Since the output format is more likely electronic, print formatting options aren't that important anymore; what matters is the capability to output to wiki, blog, website and (my current favourite) eBook formats.
  • More and more important are versioning and collaboration, up to the level of concurrent co-editing. All this has traditionally been handled outside of the Office applications, so the interesting question is: will we get back the component idea sported by OCX/ActiveX, where one could embed office components in custom applications, but now built on web standards (it didn't take off before)?
  • Meta data handling is still a little thin
But what would it take to make it fully successful? What about:
  • OneUI (pun intended): I don't want to go back and forth between different applications that do the same thing. So if I switch to a HTML based editor I want to do that wholesale. Just give me a local server with the editor on it. And I expect it to update itself. Also don't bother me locally with files and directories to look after (unless I want to), just sync them properly
  • Private cloud support: A lot of documents I'd rather NOT store on a US based server regardless how much I trust my vendor, so the application should work in my own data center too
  • Wiki style version control: I don't want to save whatever.v1, whatever.v2, whatever.v3, whatever.v4. GIT and Wave know how to do versioning, and I expect a decent UI to show the history
  • Interoperability: Can I invite external parties to one document (and its related actions)?
  • Deep reuse: can I mix slides or cell ranges from multiple sources, either by copying or by subscribing (so changes there are reflected in my doc too)?
Of course there are many more ideas to think about... in due time.


IBM's eMail endgame plan

When IBM announced Project Vulcan last year, it was a not-so-specific vision of the future of collaboration. With the announcement of the IBM Social Business Toolkit at Lotusphere 2011, that vision got its technical underpinning: it is based on a number of open standards like OpenSocial, ActivityStreams, AtomPub and others. But hidden in the press release is IBM's eMail endgame plan. While the Notes R8.x mail client was a big step forward, there is still the perception (I'm not discussing the validity of that perception here!) that MS-Outlook is the better mail client. I stated before: "Exchange mail servers are the collateral damage of users wanting Outlook" (again: I'm not judging that "want"); just compare the deployment diagrams. So what would happen if Outlook were out of the picture:
  1. Customer deploys the new Vulcan platform (whatever it will be called) on premise, in the cloud or in a hybrid model
  2. Collaboration improves dramatically using IBM Activities and the integration of Activity streams from SAP and other line of business applications
  3. eMail notifications are replaced by Activity streams
  4. Whatever email (Notes, Exchange) surfaces as Social Mail in the new UI. Traditional eMail clients become ghosts of Christmas past
  5. Office documents are moved to LotusLive Symphony (There is no reason why it needs to stay a cloud only solution) or other browser based editors
  6. Suddenly eMail becomes a "backend only" decision since the UI doesn't change when you swap your server. And in backends IBM has really big boxes that are very efficient.
I wonder if that works? (Keep in mind: IBM's plays are large enterprise plays, SMB always has been an afterthought)


Google, IBM to back Kenya programming language for Swarm Computing

Since Oracle went to loggerheads with Google over Java, and James Gosling, its inventor, joined Google, it was obvious that something was in store for the Java programming language. Now IBM and Google are joining forces for the next generation of programming languages. Both partners have vast experience in building virtual machines (IBM with their J9 JVM, Google with the Dalvik VM) and operating systems (IBM with AIX, OS/390, z/OS and a few others, Google with Android and their undisclosed GoogleOS that powers Google search).
IBM Distinguished Engineer Noah Mendelsohn explains in an internal blog entry: "Everything comes together nicely. With J9 we have the VM experience, Dalvik runs on small devices, Websphere SMASH (a.k.a project zero) did prove that a VM can host multiple languages with different personalities and our Rational Tool family is ready to deliver. We took a close look at Microsoft's Singularity operating system and their general approach of modelling their tools after existing platforms and concepts. You could say: dotNet started as "Java minus the historic baggage", so now we create "Java reloaded" which will be the best of both worlds".
In the search for a new approach, Gosling wanted to stay with his beloved beans, so Google and IBM approached Robert Chatley to expand on the excellent Kenya programming language. Naturally Robert was thrilled to see his work entering the limelight. While you can already download and play with Kenya 4.6 and Kenya for Eclipse, the really interesting release will be KenyaNG (NG stands for "Next Generation"), initially expected in beta in Q4 2011. The list of features is impressive:
  • The KenyaNG VM will be able to run directly under a hypervisor, no additional OS required. Planned are versions for Android compatible phones, tablets, laptops and desktops, x86 servers as well as big iron running under AIX or z/OS' hypervisor
  • The KenyaNG VM will support all Android APIs and extend them with parallel clustering capabilities. So intensive computations could be distributed over a swarm of mobile devices or a swarm of KenyaNG runtimes running in a cloud. Eliminating the overhead of a classical operating system makes it possible to move from cloud to swarm computing further optimising the use of computational resources
  • KenyaNG runs completely in managed code, so most attack vectors (buffer overflows, code morphing etc.) come to nothing
  • There will be various language bindings for the KenyaNG runtime, Java being the most obvious. Confirmed are: JavaScript, PHP and Python. In discussion: LUA, Erlang, Lisp, ADA and Cobol
  • Miguel de Icaza has announced that his team will port the Mono project to KenyaNG thus making it a viable destination for dotNet applications
  • Besides the KenyaNG core there are extension layers planned that seamlessly extend the platform with standardised capabilities around data, processes and workflows. The KenyaNG data kernel will offer a unified access to large scale data by directly storing structures defined in UML diagrams. The KenyaNG process engine will provide workflow capabilities that are based on BPML definitions
Kenya co-author Prof. Susan Eisenbach, head of "Distributed Software Engineering" at Imperial College London, is very pleased: "IBM's and Google's endorsement of Kenya shows that we have been on the right track for years using Kenya to teach programming to our students. The probably rapidly growing demand for Kenya skills will provide our students a competitive advantage in the job market and further enhance the college's reputation for visionary work." Wikipedia is still a little short on the language, but that will change very soon. And I hope we see XPages running on a Kenya core rather soon.


I know what you did last summer

What sounds like an old horror flick becomes reality. While geeks joke about it, privacy advocates see a losing battle taking place. But it gets really Creepy when an application written as a thesis is the perfect stalking companion. Ubuntu users do:
sudo add-apt-repository ppa:jkakavas/creepy
sudo apt-get update
sudo apt-get install creepy

Your mileage might vary


IE6 must die!

This is not a battle cry of a Microsoft hater. It is an official Microsoft campaign. While Microsoft's Internet Explorer 9 is a neat piece of software, it doesn't run on Windows XP, Mac or Linux. So my version of the upgrade banner links to something more widely available. Of course that choice isn't without alternatives (from Europe too). May IE6 rest in peace.


Before they are gone: How IPv4 addresses work

In a discussion around IP addresses and routing I had to draw pictures of how IPv4 addresses work. When dealing with clients, typically DHCP is used, which automagically puts the right numbers into the right places. However, server addresses tend to be static (this is a beginner level entry, so static DHCP addresses and dynamic DNS are out of scope) and need to be set up properly. An IP address consists of 4 numbers from 0 to 255 each. In reality those are not numbers, but bit patterns of 8 bits each. With 8 bits you can get to decimal 255:
The bit pattern of 197
The bit pattern of 127
The IP address 127.0.0.1 is reserved for your local computer (your home), hence the old joke "There is no place like 127.0.0.1" (red shoes not required). Now you will find 3 of these numbers:
  1. The IP address: that is the number for your computer. Typically you have (if connected) 2 of them: 127.0.0.1 and the one you got from the DHCP server.
  2. The netmask: it helps the IP stack to distinguish between "local" calls and "long distance calls" (more below)
  3. The default gateway: The operator for long distance calls
So one could "understand" an IP address as a 4x8 punch card where a 1 stands for a punched hole while a 0 stands for a closed one. Some samples:
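A small Python sketch can stand in for the punch card samples (the addresses are examples only): it shows the bit pattern of each octet and how the netmask separates "local calls" from "long distance calls" via the default gateway:

```python
# Show the 8-bit pattern of each octet and use the netmask to decide
# whether a destination is "local" or has to go via the default gateway.

def octets(ip):
    return [int(o) for o in ip.split(".")]

def bit_pattern(ip):
    # each octet rendered as exactly 8 bits - the "punch card" row
    return ".".join(f"{o:08b}" for o in octets(ip))

def same_network(ip_a, ip_b, netmask):
    # "local call" when (address AND netmask) matches on both sides
    mask = octets(netmask)
    return all((a & m) == (b & m)
               for a, b, m in zip(octets(ip_a), octets(ip_b), mask))

print(bit_pattern("192.168.1.197"))  # 11000000.10101000.00000001.11000101
print(same_network("192.168.1.5", "192.168.1.200", "255.255.255.0"))  # True: local
print(same_network("192.168.1.5", "10.0.0.1", "255.255.255.0"))       # False: ask the gateway
```

The netmask is literally a mask: wherever it has a 1 bit, the two addresses must match for the call to stay local.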


What's your Data Definition Language?

After recent insights into data structures I was wondering what's the right format to describe data models. It's a thought almost alien to Notes developers, since we "just add what we need". Nevertheless, having a data model eases maintenance and documentation (which brings up the question of what comes first: the application or the data model). Clarity about data models also fosters the contract-first way of developing applications, where agreements about interfaces and data structures are made up front, before implementation. There are a number of contestants available to choose from (with no claim to completeness):
  • SQL Data Definition Language. Basically that's all the CREATE TABLE statements you use to create your RDBMS tables and views. The advantages of SQL DDL are its closeness to RDBMS, which makes implementing the described data easy, the ability to create the definition in a simple text editor as well as in rich visual tools (like ERwin, which was one of the first of its kind), and the capability of other DDLs (like UML) to read/write SQL DDL. The biggest drawback in a world where SQL no longer rules alone is that very closeness to RDBMS and its lack of support for transmittable data (think web services, sync or a two-fold MVC pattern). Today I would say SQL DDL is hardly the source of your data model anymore, but an output from one of the other DDLs (most likely UML)
  • UML and its XML representation. The Unified Modeling Language is designed to do much more than describing data. Besides data one can model other structures (like components, deployments or packages), behaviours and interactions. There is a huge offering of UML tools available on the market: Rational System Architect, Visual Paradigm, DIA (or Visio ), Violet, UMlet, Altova UModel (Windows only) and many more. UML can output almost any other format including export to XML Schema. There is also a lot of literature available about this topic.
    While UML certainly covers all aspects of modelling it doesn't come without drawbacks. Working with UML does require a dedicated tool or plug-in, so it is hard to quickly getting stuff done (graphics can get in the way of execution speed). UML offers 2 ways to model data: Entity Relationship Diagrams and Object Diagrams. None of them really fit the document working style we find in web services or web 2.0 applications, so some of the XML Schema generated look a bit "hammered into shape". Furthermore I haven't seen a lot of tooling that would verify data to conformance with an UML model at runtime.
  • Eclipse EMF models. EMF has been designed mainly as a modelling framework to generate code (and thus other data definitions) out of its models. EMF data itself is stored as XML and there is an impressive list of documentation available online and in print. Nathan is a big EMF fan. One would most likely work in Eclipse to work with EMF, I haven't explored the suitability for text editing. There also are a lot of transformations available from and to EMF, which are pending my attention.
  • XML Schema (including RELAX NG and DTD). When you are comfortable reading and writing XML, using XML Schema comes naturally. While you can perfectly well write it in a plain text editor, you will most likely use at least an XML aware editor like oXygen XML, Stylus Studio, XMLSpy (which all run inside Eclipse and are Schema aware), Eclipse's base XML editor or any other of the countless offerings. If you deal with SOA you will realise that WSDL, the contract language of SOA, uses XML Schema in its bowels. XML Schema can also be used to validate documents on the fly without the need to first generate code out of it. I like the capability to define my own data types and to mix and match existing Schemas to fit my specific needs. Custom data types would be classes/objects in UML and are (to my knowledge) absent in SQL DDL. Since XML Schemas live at a (usually public) URL and a lot of existing Schemas are available already (covering diverse topics like music or eBusiness (UBL) etc.), duplication of effort can be reduced. XML Schema can be transformed to SQL (I would of course rather suggest using pureXML, but that's a story for another day)
  • I didn't find any tool to model/verify JSON data, since JSON is by definition schema free. However I expect that some generic JSON schema will appear over time; declaration driven validation is too valuable.
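To illustrate why declaration driven validation is so valuable even for schema-free JSON-style data, here is a toy sketch; the mini "schema" format (a dict mapping field names to expected types) is invented purely for illustration and is not any standard:

```python
# A toy "declaration driven" validator. The schema format here is made up
# for illustration - real schema languages offer far more, of course.
def validate(record, schema):
    errors = []
    for field, expected in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"wrong type for {field}: expected {expected.__name__}")
    return errors

person_schema = {"name": str, "age": int}

print(validate({"name": "Tom", "age": 42}, person_schema))    # []
print(validate({"name": "Tom", "age": "old"}, person_schema)) # flags the age field
```

The point: once the declaration exists, every record can be checked mechanically, no hand-written validation code per application.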
Looking at Notes and Domino, the XML nature of XPages, ATOM and RDF, Activity Streams and the IBM Social Business Toolkit I made XML Schema my first choice, however I'm ready to be convinced otherwise.
What is your Data Definition Language?


eMail archival in PDF and electronic record keeping

The question pops up quite regularly: "Our compliance department has decided to use PDF/A for long term record storage, how can I save my eMail to it?" (The question applies to ALL eMail systems.) The short answer: not as easy as you think. The biggest obstacle is legal need vs. user expectation. To make that clear: I'm not a lawyer, this is not legal advice, just my opinion; talk to your legal counsel before taking action. User expectation (and thus problem awareness): "Storing as PDF is like storing on paper, so what's the big deal?" In reality electronic record keeping has a few different requirements (and NO, printing an eMail as seen on screen is NOT record keeping - more on this in a second). Every jurisdiction has its own regulations, but they are strikingly similar (for the usual devil in the details ask your lawyer), so I just take Singapore's Electronic Transactions Act as a sample:
Retention of electronic records
9. —(1)   Where a rule of law requires any document, record or information to be retained, or provides for certain consequences if it is not, that requirement is satisfied by retaining the document, record or information in the form of an electronic record if the following conditions are satisfied:
(a) the information contained therein remains accessible so as to be usable for subsequent reference;
(b) the electronic record is retained in the format in which it was originally generated, sent or received, or in a format which can be demonstrated to represent accurately the information originally generated, sent or received;
(c) such information, if any, as enables the identification of the origin and destination of an electronic record and the date and time when it was sent or received, is retained; and
(d) any additional requirements relating to the retention of such electronic records specified by the public agency which has supervision over the requirement for the retention of such records are complied with.
(colour emphasis mine)
So there is "more than meets the eye". An eMail record is only completely kept if you keep the header information. Now you have 2 possibilities: change the way you "print" to PDF to include all header / hidden fields (probably at the end of the message), or use PDF capabilities to retain them accessible as PDF properties. The latter case is more interesting since it resembles the user experience in your mail client: users don't see the "techie stuff", but it is a click away to have a peek. There are a number of ways to create the PDF:
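To see what "keeping the header information" means in practice, a small sketch with Python's standard email module (the sample message is made up) pulls out exactly the origin, destination and timing data that section 9(1)(c) asks for:

```python
from email import message_from_string

# A tiny made-up sample message; real ones carry many more Received: headers.
raw = """\
Received: from mail.example.com by mx.example.org; Tue, 1 Mar 2011 10:00:00 +0800
From: alice@example.com
To: bob@example.org
Date: Tue, 1 Mar 2011 09:59:58 +0800
Subject: Quarterly numbers

The body is what users see - the headers above are what the law wants kept.
"""

msg = message_from_string(raw)

# These correspond to the "origin, destination, date and time" items in 9(1)(c)
for name in ("From", "To", "Date", "Received"):
    print(f"{name}: {msg[name]}")
```

A printout that drops these lines keeps the body but loses the record; whether they end up appended to the page or stored as PDF properties, they have to travel with the message.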


Manifesto for Software Craftsmanship

The Manifesto for Software Craftsmanship has been around for a while (go read it) with a growing number of signatories around the world. Robert Martin sums it up in one short sentence: "We are tired of writing crap". Since crap can easily be misunderstood, he elaborates (reproduced here without asking for permission):
  • What we are not doing:
    • We are not putting code at the centre of everything.
    • We are not turning inward and ignoring the business and the customer.
    • We are not inspecting our navels.
    • We are not offering cheap certifications.
    • We are not forgetting that our job is to delight our customers.
  • What we will not do anymore:
    • We will not make messes in order to meet a schedule.
    • We will not accept the stupid old lie about cleaning things up later.
    • We will not believe the claim that quick means dirty.
    • We will not accept the option to do it wrong.
    • We will not allow anyone to force us to behave unprofessionally.
  • What we will do from now on:
    • We will meet our schedules by knowing that the only way to go fast is to go well.
    • We will delight our customers by writing the best code we can.
    • We will honour our employers by creating the best designs we can.
    • We will honour our team by testing everything that can be tested.
    • We will be humble enough to write those tests first.
    • We will practice so that we become better at our craft.
  • We will remember what our grandmothers and grandfathers told us:
    • Anything worth doing is worth doing well.
    • Slow and steady wins the race.
    • Measure twice cut once.
    • Practice, Practice, Practice.
Go read more from Robert and follow him. While you are at it, sign the manifesto.


XSLT expression when the default namespace is missing

When dealing with XML, XSLT is your Swiss Army knife of data manipulation. Consider a snippet of XML like this:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap=""
                    <List DocTemplateUrl=""...
There are namespaces defined for SOAP, XMLSchema and XMLSchema-Instance, but not for the "payload". When trying to transform this XML using XSLT, the usual expression <xsl:template match="GetListCollectionResult"> will not yield any results. I was initially puzzled by that. Relying on geany (highly recommended for light editing), EMACS or Windows Notepad would have left me in the dark. Luckily my oXygen XML Editor revealed the actual XPath expression, saving me hours of research. There are two ways to target the element properly. One is to add a default namespace into your source XML. This is usually a bad idea, since it requires manual intervention into data you might retrieve automatically. The other option is to use the local-name() function in XSLT. So your template would look like this:
<xsl:template match="*[local-name()='GetListCollectionResult']" >.
To get to the "meat" (rather than playing with the wrappers) in the above code snippet you could use this template:
    <xsl:template match="/">
        <!-- Jump the soap wrapper directly to the list -->
        <xsl:apply-templates select="/soap:Envelope/soap:Body/*[local-name()='GetListResponse']/*[local-name()='GetListResult']/*[local-name()='List']" />
    </xsl:template>
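If you end up doing the same dance outside of XSLT: Python's ElementTree has a {*} namespace wildcard (Python 3.8 and later) that plays the role of local-name(). The envelope below is a simplified stand-in for the SOAP response above:

```python
import xml.etree.ElementTree as ET

# Simplified SOAP-like envelope: the payload lives in its own namespace.
doc = ET.fromstring(
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '  <soap:Body>'
    '    <GetListResponse xmlns="urn:example">'
    '      <GetListResult><List DocTemplateUrl="x"/></GetListResult>'
    '    </GetListResponse>'
    '  </soap:Body>'
    '</soap:Envelope>'
)

# {*} matches the element in any (or no) namespace,
# just like *[local-name()='List'] does in XSLT
lists = doc.findall('.//{*}List')
print(len(lists))  # 1
```

Same idea either way: match on the local name and ignore whichever namespace the payload happens to carry.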
As usual, YMMV


Server Side JavaScript - an overview

"JavaScript? Isn't that the browser thing to add useless animations to your website?" Despite all the web 2.0 hype, this is a question corporate developers or managers still ask from time to time. When telling them about Server Side JavaScript (SSJS for short) they reply in disbelief: "That must be some IBM thing". To some extent that is true. IBM ships three different SSJS implementations in their products:
  1. IBM's own J9 JRE/JDK 6.0 includes, following Java JSR 223, an implementation of Mozilla's Rhino SSJS engine (try it for yourself: open a command prompt and type jrunscript. You will be greeted with a JavaScript command prompt - this also works with the JVM that ships with the Notes client)
  2. The second implementation can be found in Project Zero and its commercial implementation WebSphere sMash
  3. Last but not least there are XPages with their JVM based SSJS implementation.
However IBM is not alone, besides JSR 223 and Rhino there is more SSJS to be found:
  • NodeJS is based on Google's V8 JavaScript engine that also powers the Chrome browser.
  • A number of extensions build on top of NodeJS:
    • - distributed data scraping and processing engine
    • expressjs - High performance, high class web development
    • many more
  • Flusspferd is written in C++, uses Mozilla's SpiderMonkey JS engine and provides C++ language bindings. Will extend to newer engines when available.
  • CommonJS defines a common set of APIs for SSJS. From the description:
    "The CommonJS API will fill that gap by defining APIs that handle many common application needs, ultimately providing a standard library as rich as those of Python, Ruby and Java. The intention is that an application developer will be able to write an application using the CommonJS APIs and then run that application across different JavaScript interpreters and host environments. ".
    There is a long list of implementations including NodeJS and Apache CouchDB.
    I do not know to what extend IBM does or will support CommonJS, nevertheless it is a development to keep an eye on
  • The Jaxer application server is offered by the same team who brought us the outstanding Aptana JS IDE for Eclipse
  • Erbix Application Server, featuring an online IDE
  • ejscript from embedThis Inc
  • Mynajs application server based on Rhino
  • For brave souls: mod_js, an Apache HTTP plug-in
There's a good 2011 SSJS outlook out there. Of course Google knows even more.
So it is really time to give the curly brackets a try.
Want to know more? Visit my Lotusphere 2011 AD103 session.


If you're attractive enough on the outside ...

I'm a big fan of They make the true motivators. Reading about eMail news this came to my mind:

For historic reasons I'm cautious about any Radicati claims, but when they put Domino ahead of Exchange I simply have to share the picture:

One indicator for my dislike is Radicati's inability to label IBM's software correctly. All other bars carry the appropriate product name, while Lotus Domino is lumped under the general Lotus brand name. I think with a high availability configuration Domino would rather be in the Google range. We can check the Lotus Live Notes uptimes for that. So robustness of the Domino server can't be an issue. The battle is won on the outside, read: which clients, including mobile clients, will work and how well they are liked.


Expanding into mobile

The new IT paradigm is "mobile first", so I will prepare my skills. Since SWMBO runs on iOS, I'm looking at the other side:
Archos 10.1
Now I need a decent software set for the following:
  • Twitter Client
  • RSS Reader
  • Blog Writer supporting MetaWeblog API for Blogsphere
  • Note taking
  • Video chat client
  • VoIP client (besides Skype)
  • Book reader: Aldiko
  • Remote Access: VNC / ConnectBot
  • eMail: gMail & Lotus Traveler
What else would I need?


Looking for Inspirations for your 2011 Intranet Initiative? (and musings about usability)

Having a good technology platform in one form or another is no guarantee for Intranet success, even if your platform is leading in social. Intranets need to be usable. Nota bene: I say "usable", not "user friendly". I haven't seen "friendly" software; people are friendly (hopefully). Intranets (or any technology) need to be fit for purpose, and that's called "usable". A lot of times "usable" is perceived as a rather "fluffy" description. However, there is a clear definition in ISO 9241:
  • Usability: Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use
  • Effectiveness: Accuracy and completeness with which users achieve specified goals
  • Efficiency: Resources expended in relation to the accuracy and completeness with which users achieve goals.
  • Satisfaction: Freedom from discomfort, and positive attitudes towards the use of the product
In other words: can your users do the right thing, error free, fast and completely, and will they be pleased? I recall once discussing the design of an accounting package with a tax consultant. I mentioned that account numbers are really hard to memorise and that names would be better. However, in the context of use (software for accounting professionals) and for the specific users (data entry clerks), account numbers proved to be the superior solution. Users could keep one hand on the numeric keypad to enter data while the other hand was free to flip through the pages. Another example: if you know your commands well, the command line is actually a superior user experience (specified users: knowledgeable about commands) since it will perform a given task fast and accurately and leave the users with that pleasant feeling of accomplishment. If you specify the users differently (e.g. casual or clueless users) the command line becomes a nightmare and a wizard driven GUI might provide the best user experience. Tao Zhu Gong already stated this around 400 BC as the first business rule in his Art of Trade: "Know your people".
Back to the Intranets. The Nielsen Norman Group just published the 2011 edition of their annual Intranet design competition. Reviewing the winners can serve as an excellent source of inspiration for your own initiative. Of course: don't copy blindly, since your context of users, goals and outcomes is most likely different. And don't limit yourself to the latest edition. Reports are available for 2001, 2002, 2003, 2005, 2006, 2007, 2008, 2009, 2010, Financial Service Intranets, Web 2.0 Intranets, Portals and many more. Jakob Nielsen's UseIT is always good for an entertaining read. The NNGroup Usability conference in Hong Kong is surely worth a visit.


Roll your own burncharts

Burn charts are an important communication medium to make the status of a project transparent to the users. Instead of showing the useless % complete (useless since the measurement base for 100% is a moving target), burn charts show how much work is left. I have advocated them before. A burn chart allows you to visualise the impact of a change request. In my sample graphic it is a vertical red bar.
See what is really happening in your project over time!
I've been asked how I created the samples and I have been suspected of using MS-Excel, Symphony, Paint and Gimp. I used none of them. What is needed is what one already has on a current Domino server: Dojo. I used the dojox.gfx graphic framework. To draw a graph one just needs to call burnChart("divWhereToDraw", widthInPix, heightInPix, DataAsArrayOfArrays, RemainingUnitsOfWork, displayCompletionEstimateYesNo); where data comes in pairs of UnitsWorked, UnitsAddedByChangeRequests. Something like var DataAsArrayOfArrays = [[10,0],[20,0],[20,5],[20,30],[20,0]]. It is up to you to give the unit a meaning. The graphic automatically fills the given space to the fullest. If RemainingUnitsOfWork is zero it will hit the lower right corner exactly. I call my routine from this sample script:
function drawMyBurncharts() {
  var series  = [[20, 0], [20, 10], [10, 30], [10, 0],  [30, 0],  [40, 20], [20, 0]];
  var series2 = [[10, 0], [10, 0],  [10, 20], [10, 40], [20, 40], [20, 40], [20, 20]];
  burnChart("chart4", 1400, 800, series2, 200, true);
}
The whole function is rather short and is visible in the source view of the example page (that's not an image, so it could be extended into drill downs).
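The arithmetic behind the chart itself is simple. Here is a minimal sketch (the remainingSeries name is my own, it is not part of the example page): each pair subtracts the units worked and adds the units introduced by change requests to the remaining work:

```javascript
// Sketch: compute the remaining-work line a burn chart plots.
// pairs: array of [UnitsWorked, UnitsAddedByChangeRequests]
// totalWork: remaining units of work at the start
function remainingSeries(pairs, totalWork) {
  var remaining = totalWork;
  var points = [remaining]; // start point of the line
  for (var i = 0; i < pairs.length; i++) {
    // subtract what was worked, add what change requests piled on
    remaining = remaining - pairs[i][0] + pairs[i][1];
    points.push(remaining);
  }
  return points;
}

var series = [[20, 0], [20, 10], [10, 30], [10, 0], [30, 0], [40, 20], [20, 0]];
var points = remainingSeries(series, 200);
// points: [200, 180, 170, 190, 180, 150, 130, 110]
```

Each bump upwards in the resulting line is the visual footprint of a change request - exactly the vertical red bar in the sample graphic.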

Now someone needs to wrap that into an XPages custom control.


Technical Debt

One of the publications that gives you a good overview of what is happening in the world of software development is SD Times. Their parent company BZ Media conducts physical and virtual events. Today's invitation piqued my interest since it was titled "A Technical Debt Roadmap". From the description:
""Technical Debt" refers to delayed technical work that is incurred when technical short cuts are taken, usually in pursuit of calendar-driven software schedules.
Technical debt is inherently neither good nor bad: Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. However, just as with the financial kind, it's important to know what you're getting into."
Go sign up, the speaker is Steve McConnell, the author of Code Complete. Having grown up in conservative Germany (yes, I'm Bavarian) the concept of "Schulden haben" (having debt) carried the stigma of failure and loose morals, and I'm very concerned about the required interest payments when it comes to debt. A very German engineering trait is that "things need to be done right" rather than "good enough". It is the foundation of successes like Daimler, Audi or BMW. In a nutshell it is the idea that on delivery a piece of work must be debt free. This doesn't translate very well into software, which by its nature can't be debt free. So it is always a balancing act. Ever increasing complexity and growing interdependency seem to favour greater amounts of debt: "Delivery is now, the fixpack is later". However, there are different kinds of debt, like in the financial world: your mortgage (hopefully) backed by an asset, your consumption loan, your credit card balance and (hopefully not) that open item from the loan shark in the casino. The equivalents in software would be (in no particular order):
  • Hardcode debt: system names, maintenance passwords, URL settings etc. are fixed values in an application. Debt is due when any of these needs to change. Typically found in organisations where creation and maintenance of applications are strictly separated or where the development team lacks experience
  • Broken code debt (the classical bug): something doesn't work as expected. Usually gets paid once the bug surfaces and a big enough customer complains
  • Missing feature debt: Gets paid in a following version - if there is one
  • Missed expectation debt: Sales, marketing or management make promises or announcements that don't get realised in the current code stream. A close relative of the "missing feature debt". Typically this debt is met with product management denial: "How much more do I sell if I implement that?" Denial, since the one who created the expectation already "spent the money", and failure to service this debt will lead to loss of reputation, credibility, trust and ultimately customers and revenue
  • Non-functional debt: Software does what it should but misses non-functional requirements like robustness (against failure), resilience (against attack) or integration capabilities. Expensive debt (I would call it the credit card balance of software) since the discovery of resilience flaws often leads to (expensive) emergency actions, and the lack of integration capabilities leads to rewrites or the addition of a lot of middleware. It is also difficult to spot, since more and more software is business funded and there is little understanding for non-functional traps
  • Bad code debt: The equivalent of a loan shark arrangement. Looks OK at first sight, and the software works at delivery (the night in the casino wasn't disrupted by lack of funds). Usually payment is limited to parts of the interest (patch it up, baby). Leads to the abandonment of systems (declared bankruptcy). Typically this debt is hidden as long as possible (who wants to admit owing a loan shark), only to then take everybody "by surprise".
Paying technical debt requires real money. It is paid back with engineering hours someone needs to pay (if you can find the right talent). As with all debt: if you only serve the interest but not the principal the debt doesn't go away and at some point the interest payments will be more than the principal. Something a myopic quarter to quarter view might overlook.


Unmasking Parasites - Does your website host malicious content?

Every web server, Domino included, serves files from its file system. So once someone gains access to that file system, unwanted content can be deposited there. If your content is served out of files (or you don't watch your js files carefully) it is just a small step to serving malware. The prevalence of FTP and/or weak access security makes this rather easy. So from time to time you want to check whether your server (or any other destination) serves something you'd rather not digest. Unmask Parasites does that for you. I added a link on the bottom left, labeled Site Security Check, to check this site.
As usual YMMV


The Business of OpenSource Software

Richard Stallman the founder of the Free Software Foundation and inventor of the GNU General Public Licence (also known as CopyLeft) defined the 4 freedoms of free (where usually free is classified as "free as in speech" rather than "free as in beer") software as:
  1. The freedom to run the program, for any purpose
  2. The freedom to study how the program works, and change it to make it do what you wish. Access to the source code is a precondition for this
  3. The freedom to redistribute copies so you can help your neighbour
  4. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this
The freedoms sound like pledges sworn when entering a noble round of knights for the betterment of the world. One could easily conclude that such freedoms don't have any relevance outside a small group of enthusiasts. However, that is far from reality. Apparently the Linux kernel is to a large extent maintained by paid professional developers. So what's happening there?
A little economic theory goes a long way here: if a market provides high profit margins it will attract competitors who drive prices down until only marginal profits are earned (also known as Perfect Competition). One set of factors (there are others) that prevents perfect competition is known as Barriers to Entry. For software these are the installed base (the network effect) and the huge investment needed to create new applications (also known as sunk cost). Looking at the profit margins of software companies, it becomes obvious that they run very attractive profit margins (the marginal cost for the second licence of a developed software is practically zero - not to be confused with "cost of doing business"), which points to high barriers to entry.
OpenSource lowers the sunk cost barrier by allowing the market entry cost to be spread over many participants, so it becomes possible to compete. It is also a way to starve competitors of revenue in their key market to make it more difficult for them to compete in your own markets (history tells: if the war chests are empty, aggression subsides). OpenSource can be attractive for customers as well. I've not come across any larger customer where software didn't get heavily customised. The cycle looks mostly like this:
Sequence of upgrades and customisation for vendor provided Software
A vendor releases a software product (e.g. SAP, Oracle, IBM or Microsoft) and customers engage professional services to adapt the product to their needs. This service can be rendered by the vendor's consultants, independent system integrators like Accenture, Wipro, TCS etc. or by in-house expertise. Problems or wishes in the core product are fed back to the vendor with the hope (and political pressure) for inclusion in the next version. Corporations pay maintenance but have little influence on the product manager's decision about what comes next. Once the new version comes out, the cycle of applying customisation starts anew.
In an OpenSource model the mechanism would (ideally) look different. Corporations would not pay for software licences but for know-how, implementation and help desk. By letting their staff or the staff of the chosen implementer (who could be the OpenSource project principal) become contributors, they can wield a much bigger influence over the features and direction of the product:
Sequence of upgrades and customisation for Open Source Software
Money is spent tied to the implementation of specified features. Customisations would flow back into the core product, so once the next release is out they don't need to be reapplied. If a large scale customer disagrees with the general product direction they could fund a fork of the project and go their own way. As the Debian/Ubuntu example shows, that separation doesn't need to be 100%; one can still reap benefits from a shared code base. Companies would also gain the freedom to choose whom to ask to fix a bug (or implement a feature) outside a release cycle. This way they can reduce the total bill (part of the profit margin stays in the corporation). The lower licensing cost will probably require higher consulting fees. It would be interesting to "run the numbers" and include productivity gains from better tailored software. One big OpenSource platform that is driven by customers (and academia) is the OW2 Consortium with an impressive list of infrastructure software. The big wildcard in this scenario are the system integrators. So far I haven't seen them pushing the model: "Let us provide service, support and customisation for your OpenSource". It could be that on one side they don't want to endanger their supplier relations with the large software houses, and on the other side the idea that a company could simply give back their customisation to the core product (and thus potentially to the competition) seems rather alien to business people. Another reason could be that OpenSource is perceived as "risky". Anyway, the big vendors have understood this threat, hence the fierce drive to move everything into the (proprietary) cloud. We live in interesting times.


Oracle broke the Java forums #fail (and how to use SAX to create XML Documents in Java)

Oracle seems to be overzealous in removing SUN from the face of the IT landscape. SUN used to have a very comprehensive Java forum with tons of Java related knowledge. Now all links to the SUN forum, regardless of how deep, are redirected to the Oracle forum homepage. Yes, all of them. So every cross reference linking to forum entries broke. I once contributed a code snippet on how to create XML documents using SAX (since most people think SAX is a read-only API, which is not the case) and that link now points to the homepage. Must be some vendetta against a certain ex-SUN employee who stated in 1998 "Any URL that has ever been exposed to the Internet should live forever" and even has the W3C on his side. On the other hand: "Never ascribe to malice that which is adequately explained by incompetence". I dug around in the Oracle forum and managed to locate my post, but obviously it was too hard for the database champion to maintain authorship, so the entry is now attributed to: SunForumsGuest. No wonder a lot of people are, let's say, "not fully happy" with Oracle.

Anyway lesson learned: contributions I make somewhere need mirroring here, so here we go:

A common misperception about SAX: "SAX is a parser, not a generator." As a matter of fact SAX does just fine generating your XML document, especially when it gets rather large. I've seen countless implementations of String based construction of XML that all break at some point because there is one extra " in an attribute, or a new line, or a double byte character etc. Using SAX all of these issues are taken care of. Your responsibility is to get the tag nesting right; the rest is handled by SAX, including processing instructions, text content and attributes. Here's a piece of sample code (nota bene: it has a stylesheet instruction, so when you open the resulting file in a browser you get an error since the sheet won't be there):
// Needed imports: java.io.PrintWriter, javax.xml.transform.*,
// javax.xml.transform.sax.SAXTransformerFactory, javax.xml.transform.sax.TransformerHandler,
// javax.xml.transform.stream.StreamResult, org.xml.sax.helpers.AttributesImpl
PrintWriter pw = new PrintWriter(out); //out comes from outside and is an OutputStream
StreamResult streamResult = new StreamResult(pw);
// Factory pattern at work
SAXTransformerFactory tf = (SAXTransformerFactory) TransformerFactory.newInstance();
// SAX2.0 ContentHandler that provides the append point and access to serializing options
TransformerHandler hd = tf.newTransformerHandler();
Transformer serializer = hd.getTransformer();
serializer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");// Suitable for all languages
serializer.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM,"myschema.xsd"); //Replace this with something useful
serializer.setOutputProperty(OutputKeys.INDENT, "yes"); // So it looks pretty in VI
// This creates the empty document
hd.startDocument();

//Get a processing instruction
hd.processingInstruction("xml-stylesheet","type=\"text/xsl\" href=\"mystyle.xsl\""); // That file needs to exist, or comment out this line

//This creates attributes that go inside the element, all encoding is taken care of
AttributesImpl atts = new AttributesImpl();
atts.addAttribute("", "", "someattribute", "CDATA", "test");
atts.addAttribute("", "", "moreattributes", "CDATA", "test2");

// This creates the element with the previously defined attributes
hd.startElement("", "", "MyTag", atts);

// Now we write out some text, but it could be another tag too
// Make sure there can be only ONE root tag
String curTitle = "Something inside a tag";
hd.characters(curTitle.toCharArray(), 0, curTitle.length());

// End the top element
hd.endElement("", "", "MyTag");

// Closing of the document flushes the output
hd.endDocument();
The bonus tip from the original discussion: to keep track of your tag nesting, use a Stack. Whenever you open an element you push the closing tag onto the stack, which you can then pop until empty, so your nesting will at least be XML compliant.
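The same stack idea works in any language. Here is a JavaScript sketch of it (the XmlBuilder helper is my own illustration, not from the original post) for the poor souls still building XML from strings:

```javascript
// Illustrative: track tag nesting with a stack so generated XML stays balanced.
function XmlBuilder() {
  var out = [];
  var stack = []; // names of currently open tags
  return {
    open: function (tag) { out.push("<" + tag + ">"); stack.push(tag); },
    text: function (t) {
      // minimal escaping - the kind of detail SAX handles for you in Java
      out.push(String(t).replace(/&/g, "&amp;").replace(/</g, "&lt;"));
    },
    close: function () { out.push("</" + stack.pop() + ">"); },
    finish: function () {
      while (stack.length > 0) { this.close(); } // pop the stack empty
      return out.join("");
    }
  };
}

var b = XmlBuilder();
b.open("MyTag");
b.text("Something inside a tag");
var xml = b.finish(); // "<MyTag>Something inside a tag</MyTag>"
```

Even if you forget a close() call, finish() pops the stack empty, so the nesting stays well-formed.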


Red Hat Forum and attention to UI details

Red Hat is conducting a Forum in Singapore, Kuala Lumpur and Bangkok. I found their invitation in my eMail today. While the signup form contained the usual lead-generating questions, it showed the hand of a careful UI designer:
Good Selection putting most likely countries first
For this forum one is most likely to attend from one of the 10 countries listed first, relieving users from scrolling through a list of places they will never see unless they click on a "Select country" dropdown. I particularly liked the clear labeling as "common choices" and "other countries". Mincing words, one could opt for "popular choices" and "all countries", but that would be the sprinkle on the icing on the cake.
Go and sign up. See you in Singapore Dec 3.


Protect your Domino applications from Firesheep

The appearance of Firesheep and the resulting awareness is a good thing. The threat posed by "sidejacking" of cookie based authentication has been around for quite a while (not as long as other Fire sheep); just use a packet sniffer like Wireshark or any of the other sniffing, penetration and security tools.
Safeguarding your applications requires securing the transmission lines. There are 3 general ways (note: this distinction isn't technically accurate, but it clarifies the options): server/application provided, network provided and user selected.
  1. Network provided security can be a VPN or encrypted access points (which still leave options to interfere at the end-points)
  2. User selected are conscious or automated choices to insist on encryption (ZDNet has more details)
  3. Server/application provided is the ability and insistence to encrypt the whole session, not just the authentication process
In Domino this is quite easy:
  1. You need to acquire an SSL certificate, either by buying one or by creating your own
  2. Next you install and activate the certificate on the Domino server. The catch here: you need distinct IP addresses if you have more than one domain to secure. An HTTP 1.1 host header isn't good enough.
  3. Now you need to consider: do you want to secure all databases for all connections, or only databases where you expect users to log in? If you decide on a database-by-database approach you can check the database properties and require SSL for a connection (that's a good time to disable HTTP access for databases you don't want to be accessed from the web UI)
    Database property for SSL access
  4. If you decide, that any authenticated connection must use HTTPS all the time you can configure the HTTP server to do so. In your server document you should have switched to "Load Internet configurations from Server\Internet Sites documents" long ago. If not, now is the time.
    Configure to load config from Internet sites
    In the internet site document you can decide to reroute all traffic to HTTPS or just the authenticated access
    Security settings in Internet site document
  5. Restart your HTTP server with the console command tell http restart
As usual YMMV


Grandstream GXV3140 VoiP Phone and Skype #fail

Skype certified From time to time I check the Skype website to see what gear is new. My current LinkSys iPhone worked reasonably well, but the rubber keypad is starting to degrade and the speaker phone never was great. When I saw the Skype advertisement for the Grandstream GXV3140 I thought I'd give it a shot, especially since it carries the label "Skype certified".
The Grandstream website didn't list a Singapore retailer, so I contacted them through their ticket system. They were very fast in their reply (well done!) and pointed me to the Singapore distributor Micro United Network Pte Ltd. They called me the following day to see what they can do for me. So far a very pleasant and swift experience. It turned out that the phone is sold at Mustafa's department store. Mustafa is South East Asia's biggest department store and open 24x7. If you come to Singapore it's a must visit especially in the wee morning hours. It's not the high end store, but you get any category of things, from high tech to a cheap Tuxedo for your 3 year old. I love that place. It's brimming with life any time. The phone section had 2 sets on display demonstrating a video call over 4m distance. So I got myself one.
Grandstream GXV3140 VoiP Phone The phone requires a network cable (an optional WiFi module is available) and can be configured through the phone keyboard and screen (you can actually attach a USB keyboard and mouse) or through a web browser. The phone is preconfigured to use the IPVideoTalk SIP server for the first of 3 configurable accounts (for a full review of the phone check out the TMC Blog). It turned out that the firmware didn't have Skype support yet and I had to update it. Grandstream provides instructions. It was as easy as pointing the firmware download URL to the beta site and rebooting the phone. This was where the fun ended (and I'm not talking about the Twitter implementation being broken or the scary SIP options menu):
  • Skype is hidden in the Social Software menu, it takes 7 key presses to make a call (9 if the number is not in the contact list). There would be a spare soft key for that
  • I can't select Skype as the primary phone (as in: when I pick up the handset I'll make a Skype call)
  • Skype chats are deeply hidden in the menu even when a new chat is coming in
  • An incoming Skype call disrupted playing Last.FM (good), but it didn't resume after the call finished (bad)
  • Video chat doesn't work. It turns out to be a Video codec issue. The GXV3140 only supports H.263 and H.264 but not VP7 which is Skype's native video codec. On Windows (and Windows only) Skype seems to be able to use H.263/H.264 (can't verify that since I don't have Windows here), but neither on Linux nor Macintosh.
  • The forum entry has a lot of questions and few answers.
So currently it feels Skype is "bolted on" rather than integrated. To be fair: the Firmware is still labelled beta, so there is hope.


Microsoft Office vs. OpenOffice vs. Lotus Symphony

The heat is on: Microsoft pushes against OpenOffice, Infoworld analyses the rationale behind the attack and Lotus Symphony is due for its version 3.0. Imagine for a moment you get hired as CTO or CIO of a large organization. Which one would you pick and standardise on? My take: divide and conquer. You have two groups of users: your existing base with paid-for licences and new users who don't have an [Insert-your-flavour-here] office licence yet. For old world economies the latter group might not exist, so we have a clear emerging-economy-only problem at hand. For the first group the big question is: what improvement would a new version bring? Most likely none, given the way office documents are probably used. For the latter group a package that allows seamless interaction with the first group makes sense. Now you can start arguing whether that is given with [Insert-your-flavour-here].
However, your real effort should go into a review: what office documents can be eradicated from your organisation? All these stand-alone documents, living on users' hard drives or in document repositories, form little islands of poorly structured information that are more and more difficult to manage and maintain. We have tons of tools, beginning with eMail, that try to make these office blobs flow nicely instead of starting with information flow in the first place. All these macro-infested spreadsheets that form the backbone of your monthly reporting would be better replaced by a dashboard, the tons of text documents forming the requirements for that software project would live happily in a wiki, and the progress reports are just fine in that blog. Need a spreadsheet front-end to a database with concurrent editing capabilities? Try ZK Spreadsheet. Need a list? Try Quickr or this. While you are at it, make sure all this tooling works well on mobile devices (office documents don't). You will reach the point where your remaining document needs are rather simple. Then go and revisit your Office decision again.


Progress in data structures

Four decades ago COBOL ruled business IT. Its DATA DIVISION contained all the data structures we would ever need. COBOL had clever constructs like REDEFINES and (in the procedure division) MOVE CORRESPONDING. Of course during the last forty years we made progress. COBOL data was dethroned by XML (OK, I skipped some steps in between), which is getting dethroned by JSON. Comparing the formats you can clearly see the progress made:


            01 Customer.
              02 Name.
                 03 Lastname   PIC A(40).
                 03 Firstname  PIC A(20).
              02 Address.
                 03 Street    PIC X(25).
                 03 Street2   PIC X(25).        
                 03 City      PIC X(25).
                 03 Zipcode.
                    04 Zipbase       PIC 9(5).
                    04 Zipextension  PIC 9(4).
              02 DOB.
                 03 Month  PIC 99.
                 03 Day    PIC 99.
                 03 Year   PIC 9999.


     <Customer>
       <Name>
         <Lastname />
         <Firstname />
       </Name>
       <Address>
         <Street />
         <Street2 />
         <City />
         <Zipcode>
           <Zipbase />
           <Zipextension />
         </Zipcode>
       </Address>
       <DOB>
         <Month />
         <Day />
         <Year />
       </DOB>
     </Customer>


function Customer() {
   return {
      "Name"    : { "Lastname" : "", "Firstname" : "" },
      "Address" : { "Street"   : "", "Street2" : "", "City" : "",
                    "Zipcode"  : { "Zipbase" : 0, "Zipextension" : 0 } },
      "DOB"     : { "Month" : 0, "Day" : 0, "Year" : 0 }
   };
}
Now can someone explain how to do a redefines or a move corresponding in JSON?
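For MOVE CORRESPONDING at least, a hand-rolled sketch is possible (the moveCorresponding helper is my own invention - JSON has no standard equivalent): copy only the fields that exist in both records, recursing into sub-records. REDEFINES is left as an exercise for the reader.

```javascript
// A hand-rolled MOVE CORRESPONDING for JSON-style objects (illustrative only).
// Copies fields from source to target only where target has a field of the same name.
function moveCorresponding(source, target) {
  for (var key in source) {
    if (!target.hasOwnProperty(key)) continue; // only fields present in both
    if (typeof source[key] === "object" && source[key] !== null &&
        typeof target[key] === "object" && target[key] !== null) {
      moveCorresponding(source[key], target[key]); // recurse into sub-records
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

var from = { "Name" : { "Lastname" : "Doe", "Nickname" : "JD" } };
var to   = { "Name" : { "Lastname" : "", "Firstname" : "" }, "DOB" : {} };
moveCorresponding(from, to);
// to.Name.Lastname is now "Doe"; Firstname stays, Nickname is not copied
```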


VPost needs more attention to security details

I'm using vPost, a service by Singapore's postal service, to ship stuff I order online. vPost provides me with a US, European and Japanese shipping address, so I can take advantage of free "local" shipping or get stuff from vendors that don't ship overseas. After a few teething problems the service works reasonably well and I can recommend it in general. You have to compare shipping rates from the vendor since vPost might not always be the cheapest option. However, vPost needs to pay more attention to security. They have the basics right and use https on all of their site, so that's OK. They also leverage "Verified by Visa", which uses one-time tokens via SMS to secure transactions. The improvements needed show after you enter your credit card details and hit next:
vPost security challenges
  1. The credit card number is displayed in full (other sites only show a few digits). So someone peeking over the shoulder can note it (same applies to the expiry date)
  2. The security code is displayed. It shouldn't be shown AT ALL.
  3. Being security conscious (and not liking tracking cookies) I don't allow cookies from other websites. vPost requires me to lower my security standards. I'm sure that could be avoided
Some work to be done.


Lotus Symphony beyond 3.0

Lotus Symphony 3.0 has been in Beta for a while. Features have been frozen for quite some time and it can't be long before its release. The Symphony development team is now moving to code beyond 3.0. While this is not an official process, now is a good time to head over to the Lotus Symphony Ideaspace, proudly provided by Elguji Software. Share your ideas and vote on ideas you find there. The Symphony team listens to the space (that doesn't mean that they follow the requests, but at least they know about it). My favourites are (besides the speed and fidelity improvements of course):
  • Make ODF the default format for the LotusLive Collaborative Editor (a.k.a. Concorde)
  • Allow opening HTML from the File-Open menu (or command line)
  • Provide MailMerge to send results as eMail body
  • Add XForms capabilities to Symphony and let the dataset be stored in an NSF


Visualize using Mindmaps

Mindmaps are an incredible tool to collect and share thoughts on any topic in a very compact and comprehensive format. Just look at the sample Ernest did for Water (his current topic in science):
Mindmap about water, click for a larger version
He used iMindMap, which has the most natural look of all the mindmap software offerings I've seen so far (and is available on Windows, Linux, Mac and iPad). It is the commercial offering of Tony Buzan, who claims the invention of mindmapping. If you like eProductivity, you might want to look at MindManager, which is primarily Windows (there's a Mac version that's usually behind and there's no Linux version) and can be imported into eProductivity. Notes users will find MindPlan intriguing. It is available on all Notes client platforms, can show data in mindmaps and Gantt charts and uses NSF as its storage engine. Sharing and collaborating on MindPlan is a breeze. For fans of OpenSource there is FreeMind, also available on many platforms. A very different approach is taken by The Brain, which lets you dynamically navigate the map and put any topic into the center. If you want frequent updates on what's up in the mindmapping software scene, subscribe to the Mindmapping software blog.


Is Internet Explorer holding you back?

There is a paradox going on in corporate IT (probably more than one): on one hand developers whisper "Our standard is IE[6]", on the other hand managers buy iPhones, iPads and Android phones (which are all WebKit based) and demand that all applications move to browser-based access, be it on the Intranet or in the cloud. Development for browsers is painful compared to client environments (you need to know at least 4 totally unrelated - in terms of syntax - technologies: HTML, CSS, JavaScript, HTTP). HTML5 will address some of the pain (while you still have to learn the 4 technologies). Looking at the browsers' HTML5 capabilities you can only conclude that your mobile device will outshine your desktop browser by a large margin:

Summary of HTML5 support per browser


                        Internet Explorer  Firefox   Safari    Chrome    Opera
Two versions back       6.0: 3%            3.0: 42%  3.2: 57%  3.0: 76%  10.1: 51%
Previous version        7.0: 10%           3.5: 70%  4.0: 78%  4.0: 81%  10.5: 71%
Current                 8.0: 25%           3.6: 76%  5.0: 86%  5.0: 85%  10.6: 77%
Near future (2010)      8.0: 25%           4.0: 90%  5.0: 86%  6.0: 89%  10.6: 77%
Future (2011 or later)  9.0: 58%           4.0: 90%  5.*: 88%  7.0: 90%  10.7: 78%
(Table found here.) IBM made the decision to move to Firefox. Is it time for you to move too? A few places to check out for the new capabilities:
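Given version spreads like the table above, probing for a capability beats sniffing browser versions. A minimal sketch; the probe object stands in for the browser's window, and in a real page you would pass window itself:

```javascript
// Capability probe instead of version sniffing. "win" stands in for the
// browser's window object; pass the real window in an actual page.
function detectFeatures(win) {
  return {
    canvas: typeof win.document.createElement("canvas").getContext === "function",
    localStorage: "localStorage" in win,
    video: typeof win.document.createElement("video").canPlayType === "function",
  };
}

// Fake "browser" so the sketch runs anywhere (purely illustrative):
const fakeBrowser = {
  localStorage: {},
  document: {
    createElement: (tag) =>
      tag === "canvas" ? { getContext: () => null } : {},
  },
};
console.log(detectFeatures(fakeBrowser)); // { canvas: true, localStorage: true, video: false }
```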


DIA an OpenSource alternative to Visio

IT people like to draw diagrams. The usual weapon of choice is Microsoft Visio. On Mac or Linux, that's not an option. A cross-platform (and OpenSource) alternative is DIA. It runs on Windows, Linux and (thanks to the Darwin Ports project) on Mac OS/X. It is by far not as powerful as Visio and doesn't produce shiny business graphics like SmartDraw (which I use for a lot of the blog illustrations here), but it clearly qualifies as "good enough to get the task done". One sore point in the past was the lack of nice-looking objects for network diagrams. Now Hagen notifies us of the Gnome DIA Icons project.

They look good to me.


Backup vs. Archival and Thoughts on Archival

Archival often gets confused with backup. The activities are (technically) very similar and invite such confusion. Both move bits from "the place where everybody looks" (the mailbox, the current database, the file share, the intranet etc.) to some other place (a backup tape, cheaper storage, a CD-ROM, /dev/null etc.).

Backup serves the sole purpose of keeping data available in case the main storage area is no longer available (due to accidental deletion or soft- or hardware problems).
Archival is the removal of data from the "main area" to an "archive area" for later retrieval for historic or compliance reasons. A secondary motive for archival is to remove obsolete or less relevant data from the active work area to improve performance, shorten search time or save on storage in the system hosting the active work area. To confuse matters further: quite often technologies designed for backup are successfully used for archival (e.g. copy data to a removable storage like a tape or optical disk).

In other terms: you don't expect to ever restore a backup unless something went wrong, while accessing an archive can be part of a regular business process. There are a few perceptions about archival that need to be put into perspective:

Archival does not save any storage space!
At least not when you look at all storage across the enterprise. However it can help to save storage in your active work area (which is most likely the most expensive one) and thus reduce storage cost. IMHO the biggest advantage of archival is the reduction of the data a user has to look through, since the current work area then only contains relevant data. This is also the greatest peril of archival: when data gets archived too early, the archival location turns into yet-another-work-area-to-check. (OK, your archive might use better compression than your live system - but are you sure it isn't just a backup?)

Archival needs information life cycle management
Every piece of information has a certain life cycle. Like food items, information has a "best used before" date (which varies depending on the purpose). It follows roughly this pattern:
  • New: freshly created, might not be relevant yet (e.g. upcoming policy change)
  • Current: data supports one or more business processes and is actively used
  • Reference: data is no longer actively used, but is regularly required for reports or comparison
  • Compliance: data is obsolete but needs to be kept for compliance (e.g. business records in Singapore : 7 years)
  • Historic: the data doesn't need to be kept, it doesn't serve any active business process, but might be of historic interest. This state of information is a field of tension between (corporate) lawyers and historians: historians like to keep everything, while lawyers see a potential discovery risk (cost and content) in every piece of data kept. When analyzing the archival policies of any organization one can find out who won this conflict.
  • Obsolete: In 2050 really nobody cares how many rolls of toilet paper you bought at what price (while the price volume of toilet paper might still be of historic interest as curiosity how mankind could be so wasteful with resources before they had the self cleaning buttock nano coating)
Data might skip some of the phases. As one might notice, I'm speaking about "data" in general: the life cycle applies not only to documents but to all sorts of information. To have a successful archival strategy, the status of each piece of information in that life cycle should be explicitly known. Unfortunately this is still the exception rather than the rule. Short of an explicit expiry date we make implicit assumptions like "Unless stated otherwise, a document in this place expires xx days after last update" or "Unless stated otherwise, a document in this place expires xx days after last use". Usage is much harder to track: if someone looks at a document only to find it wasn't what she was looking for, an automated system would count that as usage - bad. Or I use the search engine and the search result already shows the information, so I never open the location - the document expires as unused - also bad. Hence the most prevalent measure is "last update". Some clever verification cycle asking the owner to extend the validity is needed. But make it a clever one: if it turns into a one-by-one update exercise nobody will bother. A good rule engine can help there. Most of the technical troubles (short of broken equipment) you might experience with archival are rooted in strategic (mis-)decisions.
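To illustrate the "last update" heuristic, here is a minimal sketch with made-up thresholds; any real policy would come from a rule engine, not hard-coded numbers:

```javascript
// Naive life cycle classifier based purely on age since last update.
// The thresholds (30 days, 1 year, 7 years) are illustrative assumptions.
const DAY = 24 * 60 * 60 * 1000;

function lifecycleStage(lastUpdate, now = new Date()) {
  const ageDays = (now - lastUpdate) / DAY;
  if (ageDays < 30) return "current";          // actively used
  if (ageDays < 365) return "reference";       // kept for reports/comparison
  if (ageDays < 7 * 365) return "compliance";  // e.g. 7-year business records
  return "historic";                           // archive or dispose per policy
}

console.log(lifecycleStage(new Date(Date.now() - 100 * DAY))); // "reference"
```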

What's your Retention/Archival policy?


Presenting New Software to Business Users

We get excited about new software releases. Full of enthusiasm we swarm out to tell the world of business users about all the shiny new features. We are met with a shrug. What went wrong? We used the wrong approach! We drowned our audience in a river of features. When presenting new software to business users, be it a new product or an upgrade to existing software, listing features (how kewl is that...) won't excite anybody outside tech. We need to tell a compelling story of how the new software makes users' daily work better. In other words: we need to clarify what's in it for them.
So you structure your presentation around business scenarios and show how the set of capabilities in your software benefits each scenario. Software designers (at least the enlightened ones) use Personas for their scenarios. Standing in front of an audience you (should) know, you of course pick examples that are relevant to the people who look you in the eye in that very presentation. However, a good business scenario alone is not enough. The way you communicate it is essential. Everybody loves a good story.
The best advice for structuring your presentation I have found so far is provided by LeeAundra Keany a.k.a. The Contrary Public Speaker. She strongly suggests using a classic speech approach as used by Aristotle, Plato, Quintilian or Cicero. Her presentation model consists of six simple elements (summary and explanation partly from the book, partly by me):
  1. The Message
    You need to answer the question: "What do I want the audience to do and why should they do it?" For software demonstrations the "What" seems simple: "buy my product" or "demand the upgrade", but after a little soul searching you might end up with "see a different work style" or "change their attitude towards ...". The why is trickier. Here you need to know your audience well - it's the "What's in it for me?" question. A good message is clear, focused and compelling.
  2. Audience Analysis
    LeeAundra sums it up: "The quality of the speech itself is powerless against the preconceived notions of the listeners UNLESS the speech and the speaker understand and deal with them". So be clear about who your audience is. The CEO pitch differs from the CIO pitch, which differs from the message for the personal assistants. You need to be clear about three questions: "What do they think of your message? What do they think of you? What is their state of mind?"
  3. The Speech
    Good speeches are short and to the point. Good demos too. Your biggest mistake is to walk through endless variations of the same thing. Prepare the variations in case your audience demands more, but keep your plan to the essentials. Write down your speech. Use simple words (keep in mind: simple doesn't mean simplistic!). Good speeches are highly structured and so should be your demo. Never just jump in; explain what will happen beforehand. I call that the "dentist model": (s)he will first tell you "this will hurt a little" before yanking out your tooth. The structure consists of 5 main components:
    1. Introduction: You provide an attention getter (no joke please unless you are really funny), explain why this is important - plan that message well, you will tie back to it - and the preview what you will show and tell in the body. For a detailed discussion of attention getter options see the book.
    2. The Body: The main part of your speech. Every item (don't have too many of them) has a point - a business case so to speak. You state your argument and then provide supporting evidence. In a software demo that's the part where you click around. You sum up the learning points and provide the transition to your next point. Build your points around utility rather than feature by feature. I have found that deliberately repeating features in different combinations for different use cases works very well.
    3. Preliminary Conclusion: You sum up the arguments you built during the main body. Don't stop with "That's what I wanted to show". You can and should tie back to your introduction. Something like "I promised to show you ... and I have delivered by ...". You can also state what else is possible that you didn't cover. Lead up to the Q&A session.
    4. Question and Answer session: Of course you know your software, so any questions about functionality should come easy. However, you need to be prepared for questions around statistical evidence, reference customers, implementation needs etc. Quite popular are questions about how your product compares in function or market share to your competitors. I typically turn a question like "what about feature X" into "How do you solve the use case where product Y is using feature X with my product?"
    5. Final Conclusion: Don't end your presentation with the last question from the Q&A session. Tie back to your message from the introduction. Rule the floor.
    This is a *very* compressed summary. Go read the book.
  4. Delivery
    You need to practise. Practise. Practise. In IT we tend to be guilty of using too much insider lingo, so watch out for that (and watch your use of TLAs). Also slow down: speaking faster, you risk losing your audience. Pause to let key points sink in. Silence is not dangerous. Watch your non-verbal delivery: posture, gestures and eye contact. My own posture greatly improved after practicing Tai Chi and martial arts. Stand straight, feel both feet on the ground, be in the moment. Look at your audience. Eye contact is king. This is another reason why you want to explain things before you click through them: while you click, everybody's eyes are elsewhere. And - you are not a caged animal: don't pace.
  5. Visual Aids
    LeeAundra states: "The Madness Has to Stop!" I second her. Go look at Presentation ZEN (or buy the book) Nuff said.
  6. Question and Answers
    This most likely is the most important part of your speech. You switch from story telling to conversation, from showing to interacting. In the Q&A session you will reveal how well you understand your topic (and the audience), where your passion lies. A good answer to a question accomplishes three things: it answers the question, it strengthens your argument and it reaffirms your message.
LeeAundra has a Podcast and sells her book online. Go get it - it's one of the best career investments you can make. I love the final sentence in her book:

"Now go out there and impress everybody!"


Sync to success

I love the cloud. It provides access to all my data from any device, any time from anywhere (and for anybody when the next security hole is discovered). No headaches about backup, storage and undelete. I'm on cloud 9.
Not so fast. When I need the service most, it is down, the network is slow or the latency is unbearably large. The cloud turns toxic. But I want to have my cake and eat it too. And yes, it is possible. The secret is called sync (others call it cache mode). Mostly I want to interact with local apps and local data. These local apps send and receive data from the cloud as and when they have connectivity. They do that in the background; they don't bother me. The apps update themselves from the cloud if I permit it (individually or as blanket permission). The apps are smart enough to figure out what data can be kept local based on context and device. To achieve that, synchronization is king. The dust hasn't settled over the sync protocol battle: we have SyncML, CouchSync, NRPC, HTML5 sync, .net Sync, Expeditor sync and many others. Partial sync (what I would call "contentual sync") hasn't been solved satisfactorily. What is your sync strategy?
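At its core, every sync protocol must answer "which copy wins?". Here is a toy last-write-wins sketch; real protocols like SyncML or CouchDB replication handle conflicts, deletions and partial replicas far more carefully, so treat this only as the core idea:

```javascript
// Last-write-wins sync between a local cache and a "cloud" store.
// Both sides are plain objects keyed by document id, each entry carrying
// a value and an "updated" timestamp. Newer entries win in both directions.
function syncLWW(local, remote) {
  const keys = new Set([...Object.keys(local), ...Object.keys(remote)]);
  for (const key of keys) {
    const l = local[key], r = remote[key];
    if (!l || (r && r.updated > l.updated)) local[key] = r;  // pull newer copy
    else if (!r || l.updated > r.updated) remote[key] = l;   // push newer copy
  }
}

const local  = { a: { value: 1, updated: 10 } };
const remote = { a: { value: 2, updated: 20 }, b: { value: 3, updated: 5 } };
syncLWW(local, remote);
console.log(local.a.value, local.b.value); // 2 3
```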


Structuring IT lessons

Teaching complex topics is as much an art as it is a craft. To become an artist you have to be an artisan first. One of the tools of the trade are clear structures in your teaching materials. While working with Digicomp I was introduced to TeachART, which answers the structure question for adult education. Based on that knowledge and my two decades of training experience I found the following structure to work well for study assignments.
  1. Learning Goal
    Introduce the exercise and what you will learn. E.g. "In chapter 7 of 'Culinary survival for geeks' we will learn how to prepare Spaghetti al olio."
  2. Learning Rationale and Time
    Explain why one would want to learn this specific skill. E.g. "The dish is rapidly prepared, providing advanced carbs for mental activity without distracting too much from other work. The light oil coating makes the noodles tastier and pleasant to eat."
    State how much time the exercise should take: "Allow 25-30 minutes for this exercise. While the noodles cook (about 20-25 min) you can chat with your classmates."
  3. Prerequisites
    What do you need to know to successfully follow and complete this exercise? E.g. "Before attempting this exercise you should have successfully completed chapter 3 'Boiling Noodles'". Sometimes you can condense the prerequisites into a short statement: "All exercises are designed to be completed in sequence; the results are the prerequisites of the following exercises". Of course you need to formulate the prerequisites for the whole class clearly: "You are a geek who is fed up with ready-processed food and would like to try something new. You can handle a kitchen knife without major injuries to yourself and others. You are willing to eat your own cooking - if we are successful."
  4. Success control
    Tell what the successful outcome will look like, so students can gauge for themselves how successful they were. This is very important. Try to give as many indicators as possible for a self-assessment. For software, screenshots are a good approach. Don't describe what to do (that's the next step) but the outcome and how to verify its success: "Your noodles will have a bite that is soft on the outside with a slightly firm core. You can see a gloss on the noodles from the olive oil but no puddle of oil below the noodles. See this picture..."
  5. Detailed Steps
    The "meat" of the chapter with steps to follow. Design them matching the audience. If you describe the steps in too much detail they become boring, if you are too brief students get lost. If an exercise includes steps done before, refer back to them. "Prepare noodles like in chapter 4. Use frying pan #3 to heat 80ml of oil. Use heat #2 on your oven ...."
  6. Food for thoughts / Things to explore
    What else could be done. What are variations of the task. This is an important buffer for your fast students. If they finish ahead of time they can deepen their understanding with additional exercises. "You can cut one clove of garlic before heating the oil and mix that into the oil when it is hot."
  7. Related information
    Where can the students find more information. Like variations of that exercise, background information or alternative approaches. "In our cookbook page 321ff you will find more spaghetti variations: spaghetti al pesto, spaghetti carbonara or spaghetti al pomodoro. A discussion on carbohydrates as brain food is on the course website together with tips how to pick the right oil for your taste"
  8. What's next
    Again, you could shortcut this with the implicit sequence of exercises, or you suggest a learning path. "The spaghetti have boosted your brain functions and you have completed your work assignment ahead of time, so you will learn how to reward yourself with a nice dessert in chapter 8: 'Pull me up - Tira mi su'"
So it is: what, why, how. Creating good materials is hard, time consuming work and as usual YMMV.


Open Standards

Open Source is something different from Open Standards. We like to confuse the two. An Open Standard can be implemented in 100% proprietary software, while Open Source software can implement proprietary standards (is that an oxymoron? - prevailing practice might be a better word here). One example is Gnome Evolution implementing the proprietary Exchange wire protocols.
Hugo Roy, in an open letter to Steve Jobs sums up Open Standards for the busy reader:
An Open Standard refers to a format or protocol that is
  1. subject to full public assessment and use without constraints in a manner equally available to all parties;
  2. without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
  3. free from legal or technical clauses that limit its utilisation by any party or in any business model;
  4. managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
  5. available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.
Steve disagrees, stating "An open standard is different from being royalty free or open source". While the latter part is almost beyond discussion (some claim a standard can't be truly open unless at least one Open Source implementation exists, the standard being like a class and the source being the object instance of that class), the former is hotly debated. One camp interprets "without constraints" as "patent and royalty free", while the other camp (including IETF, ISO, IEC and ITU-T) wants to allow for "reasonable and non-discriminatory" patent licence fees (RAND). The FSF would rather read RAND as short for RAN(D)som. We live in interesting times.


Carrots and Sticks

My friend Michael Sampson reports back from the Salesforce "Dreamforce" conference. In tune with his latest book he enlightens us about user adoption. John McGuigan of Fiberlink Communications presented The Cardinal Rules of User Adoption:

I absolutely agree that utility trumps any carrot or stick. If a tool helps you to "be more efficient in what you do every day" (incidentally the Lotus motto) you won't need external motivators but rather traffic management to handle the user rush. A good example are mobile devices. Hardly any organization needs to advertise the use of a smartphone internally. Users want them, and want them badly, since their perceived advantages are obvious.


No more SIS in MS Exchange 2010

Ferris analyst Bob Spurzem covers news around MS Exchange. In this entry he highlights that MS Exchange 2010 has removed Single Instance Storage (SIS):
One of the lesser-known changes to Exchange 2010 is the removal of single instance storage (SIS). The reason for this is related to an architectural change, disk I/O performance, and the availability of cheap disk.
There tends to be a trade-off between better disk I/O performance and reduced storage capacity. Architecturally, Exchange 2010 introduces a new per-mailbox table structure that replaces the original per-database table structure. The original per-database table structure was optimized for SIS, but disk I/O suffered. The new per-mailbox table structure improves disk I/O, but without SIS.
In place of SIS, Exchange 2010 uses compression. Only large, redundant attachments files truly benefit from SIS; otherwise, compression delivers roughly the same volume of data as SIS."

Well, MS sales people had always claimed (never backed by figures from real deployments) that SIS was a space-saving advantage over Domino's one-man-one-database approach. Guess they learned the scalability lesson the hard way. Now if you want SIS for attachments plus design compression and data compression - Domino is your answer.


Client Application Platforms

There are wonderful and awesome strategies for how to organize your data centers and back-end data processing. Often these back-ends are supposed to be accessed by "Thin Clients", replacing "Thick Clients" or "Fat Clients" which are considered "legacy (read: tried, tested and boring)". Looking at the memory footprint of modern browsers I can't see the "thin" part. My guess is that "thin" actually means: "comes with the operating system and doesn't need to be taken care of." Never mind the security patches and frequent updates. The opponents of "Thin Clients" coined the term "Rich Client", which indicates connectedness and rich interaction models. The real difference, IMHO, is single-purpose, disconnected clients (like your old-school spreadsheet, minus Quickr/SharePoint) vs. connected application platforms. And looking at the platforms, the dust hasn't settled (yet). Regardless of which platform one picks, the challenge today is device diversity. You might have standardized on X, but you can make a safe bet that a C-level executive will hand you a device Y and demand that you make it work with your enterprise applications (typically Y ∈ [iPhone, Palm Pre, Blackberry, Android, {stuff-you-never-heard-of}]). Anyhow, you face the real-estate challenge:
1920x1200 is 30x bigger than 320x240
Your 24" monitor (60.96cm for readers who live in the EU) with its 1920x1200 resolution shows 30x more pixels than the small smart phone with its 320x240 screen. Your strategy should allow for as much reuse as possible. Here are the current options as seen through my personal bias:
  • HTML: To be correct you would need to say: HTML, CSS, JavaScript and DOM. With the rise of Ajax (sometimes available technologies "just" need a name to become popular) this seems to be the predominant direction most enterprise developers are taking. Supported by frameworks like Dojo, Prototype, jQuery and others, creating rich interaction has become way simpler. IBM settled on the Dojo toolkit for all their products, so learning Dojo is a worthwhile investment. Luckily, by now rich documentation is available both online and offline.
    The baseline for this approach is support for IE6, which severely limits the platform. If you don't use any of the toolkits you are also hampered by little incompatibilities between the browsers (Quirks mode, anyone?). Further challenges are (list not complete): the lack of local storage other than cookies, no native media capabilities and no uniform extension model. Clearly a legacy platform. This highlights a big dilemma for "thin clients": the browser available on the workstation does matter, and the idea of "everything on the server" stays a pipe dream. While all you need to develop in HTML is gEdit (Notepad if you are on Windows), you will want a powerful IDE and a strong debugger.
  • HTML5: This includes CSS3 and a host of new capabilities like <canvas> or <video>. The most prominent HTML5 execution environment is WebKit, the engine powering Konqueror, Safari, Chrome and others. WebKit is also used on the iPhone, iPod/iPad, Android and Nokia's Symbian S60 platforms, so WebKit is well positioned in both the mobile and the PC space. Firefox and Opera also support HTML5. HTML5 provides local storage, which led Google to abandon their own toolkit for that (Google Gears). Notably absent from full HTML5 support is Microsoft's Internet Explorer 8.
    HTML5 is still a very young standard, so some implementation hiccups across browsers can be expected (just check for video support). Using the same toolkits as mentioned above, you have a safe strategy going forward. What HTML5 currently doesn't solve is cross-domain aggregation; this stays a server-side task. IBM has committed to HTML5 (not only) as part of Project Vulcan. IBM also spent quite some effort designing a style guide (called IBM OneUI) to ease design decisions for developers. What HTML and HTML5 don't define is the communication between individual modules. Here independent specifications like iWidgets (part of OpenSocial) need to fill the gap.
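The local storage mentioned above follows a simple key/value API. A sketch of the pattern (the storage object is injected so the snippet also runs outside a browser; in a real page you would pass window.localStorage, and the "prefs." prefix is my invention):

```javascript
// Thin wrapper over a Web Storage-style object (setItem/getItem),
// serializing values as JSON under a namespaced key.
function makePrefs(storage) {
  return {
    set: (key, value) => storage.setItem("prefs." + key, JSON.stringify(value)),
    get: (key, fallback) => {
      const raw = storage.getItem("prefs." + key);
      return raw === null ? fallback : JSON.parse(raw);
    },
  };
}

// In-memory stand-in for window.localStorage (illustration only):
const mem = new Map();
const fakeStorage = {
  setItem: (k, v) => mem.set(k, String(v)),
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
};

const prefs = makePrefs(fakeStorage);
prefs.set("theme", "oneui");
console.log(prefs.get("theme", "default")); // "oneui"
```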


% complete is a useless measure

At least when it comes to software development. Rather use "tasks/units of work left to do". Geek and Poke sums it up nicely:
Do you want to know more®? Read a book and sign the manifesto


Engaging the OpenSource Community - Government Style

The Federal Government of India is undertaking the mammoth task of assigning a unique identification number to more than a billion Indians. An organization named UIDAI has been set up for this really interesting project. Since it is government, you have committees and tenders. The database will use biometrics to establish identities, which one could either brand an Orwellian nightmare or a step toward establishing enforceable citizens' rights (you need to be someone to own something, especially title deeds) for everybody.
A quick peek at the site shows that it is running Apache/2.2.3. UIDAI seems to like the idea of OpenSource and invites contributors to participate in the project. I think this is a great idea to promote Indian technology and Indian engineering proficiency. Securing a database with a billion biometric profiles that needs to be accessed by thousands (if not millions) of legitimate users is a dream challenge for any IT architect and security professional (what type of dream is in the eye of the beholder). It is also very laudable that all the documents are in the open for the world to see. I like this transparency. But where there is light, there is shadow. Reading the code contribution statement I find (emphasis mine):
  1. If the Client Software or a module developed by any Developer is accepted by the Authority for implementation in the field for enrolment, the contribution of the Registered Developer will be recognized. However, the source code, documentation and IPR will belong to the Authority. Accordingly, the Registered Developer will be required to enter into appropriate agreements transferring all rights and intellectual property to the Authority for their product and contribution.
  2. This effort for creation of the enrollment software is completely voluntary and the Authority is under no obligation to provide any financial incentive or consideration to the concerned Registered Developer for the product.
Let me translate that into plain English: we take your code, we take your rights, we won't compensate you, so you are in it for the fame [only] (and in #8: we are not liable for the rights we hold). While it is stated later on that the Authority might open source the code at their discretion, the #6 requirement is in direct conflict with every known OpenSource licence: the GPL, LGPL, MPL, APL or any of the others.
I'm curious how that will work out.


Does that apply to Software too?

Should be framed and hung over every product manager's desk like a Damocles sword:
Do something
Applies to developers too.


The books you read the stuff you know

I see customers a lot. Business people, IT people, sales people and sane people. Since I'm not an IT graduate, small talk often turns to the source of my knowledge. "Study hard, play hard" is the opener I usually use. While the internet has taken over as the dominant source, I still fancy books and have collected some over the past few years. When I moved to Singapore 10 years ago, I left most of them behind, so I had to rebuild my library. In the section "Computer and Internet" I currently have 64 items, which you are welcome to review.
I'll add other books about business and leisure some time in the future. So stay tuned.


Cost of messaging storage not an issue?

David Ferris has a blog entry titled "Cost of Exchange Storage Not An Issue". What he says is not Exchange specific (it is only the first time that Exchange would be capable of supporting large mailboxes) and can be applied to any messaging system. David states: "Users will have large mailboxes. 5GB to 20GB will be common". With storage cost (again David's figures) of $2-$30 per 30 GB, that translates into a per-user/per-year cost in the single digits. I'm not so sure about these figures. They might be true for home storage, but for large-scale enterprise storage systems David might be off by more than an order of magnitude. Storage cost isn't linear, but has increasing marginal cost once you reach size limits.
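A quick back-of-envelope check of those raw figures (David's numbers, my arithmetic) shows the upper end already leaves the single digits, before any enterprise overheads like redundancy, backup and administration:

```javascript
// Annual per-user storage cost from mailbox size and cost per 30 GB.
// Figures are David's quoted ranges, used purely for illustration.
function annualStorageCost(mailboxGB, costPer30GB) {
  return mailboxGB * (costPer30GB / 30);
}

console.log(annualStorageCost(5, 2).toFixed(2));   // "0.33" - best case
console.log(annualStorageCost(20, 30).toFixed(2)); // "20.00" - worst case
```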
What's your take? Are large enterprise mailboxes coming?


How to generate an installable list of Firefox plug-ins/addons?

Question to the lazy web: how do you generate an installable list of Firefox plug-ins and addons from the plug-ins and addons you actually have installed? I have a set of plug-ins and addons in my Firefox which a lot of friends/colleagues would like to use too. Ideally I would press a button and generate an HTML page with links to the original install locations. After all, the plug-ins and addons "know" where to look for updates for themselves. A GreaseMonkey script would be acceptable too. Anybody got an idea how to do that?


ITSC Synthesis Journal 2009

Singapore's Information Technology Standards Committee publishes a yearly journal with articles related to IT standards and IT trends. In the 2009 edition (the link might only be active after a while; check the 2008 edition until then) I contributed an article titled "Implementing web2.0 in the Enterprise". There's a real printed version of it too.


Online storage / backup

Question for the lazy web. There are a number of services around that allow you to keep a local folder in sync with web storage and (optionally) a folder on other machine(s). There is DropBox, the Ubuntu service Ubuntu One (UbuntuOne uses CouchDB as its backend, way to go Damien), SugarSync, PowerFolder, Little Networks, DocuSync, ZumoDrive and many others. Some offer a free limited account, some a trial account.
Which one works for you (and why)?


Microsoft's licences too complicated? Let IBM help you.

There is an interesting story on Slashdot pointing to the complex Microsoft licensing model and Steve Ballmer's statement that he won't change it anytime soon. One nice snippet from Steve: "Customers always find an approach which pays us less money". IBM is helping customers do exactly that. Our Project Liberate helps to minimise Microsoft software cost without breaking compliance. Now we only need to simplify our own licences, but that might come soon.


In defense of the Inbox

Web 2.0 promoters (including myself), new emerging technology and a lot of offerings suggest the imminent demise of the inbox in favour of wikis, blogs, activities, tweets and all the other shiny new technologies. However, once you descend from the Olympus of IT-savvyness (or ascend out of Hades, depending on your point of view), you will find many users clinging to their inboxes, rallying around the battle cry "Out of my cold, dead hands". Working on the technology forefront of collaboration one might easily dismiss this as fear of change. On closer look, however, you will realise that these users are right (I hear the howling of the web2.0 crowd, but bear with me and read on).
As you might know, I am a big fan of GTD (and its Lotus Notes incarnation). When you look at its model of operation you will see a big box labelled "IN" at the very top, at the beginning of dealing with all the stuff entering your life. Any action you take, any decision you make starts with an (explicit or implicit) "it came to my attention". The inbox is supposed to provide the single point of entry for electronic attention. If I look at the current web2.0 offerings and presentations I see the digital equivalent of ADHD. Tweets compete with feeds, with emails, with chats for my attention. None of them magically increases the 86400 seconds that make my day. (And there is nothing easier than immersing yourself in a stream of tweets and feeds to procrastinate on that boring task.)
As a non-IT professional I'm working on problems like: how to run this project, how to close this deal, how to construct this engine, how to heal this patient. These are important questions and I don't want to be distracted by "how do I communicate with [insert name here] about [insert topic here]". So I go to my inbox and hit a button: New. So what makes a good inbox?
The eight properties of the universal Inbox
  • It is personal: It is my Inbox. I am in charge of what ends up there, how long it stays and how it is visible (eMail inboxes fail miserably here)
  • It is complete: any type of digital artifact is visible here: eMails, chats, files, wiki pages, blog entries, workflow items, custom application notifications. It doesn't mean that all these need to be stored inside the inbox; they could be just rendered there (automatically or on a click)
  • It is structured: It is not just a list of all items by date, but offers visualization based on all available meta-data. All items can be shown in context (automatic and manual): what other items are related to the current entry (Google Wave rides on that idea)
  • It is universally available: It follows me. I can access my inbox online, offline and on mobile
  • It is actionable: Acting on new items directly from my inbox like: Turn any type of incoming item into any type of outgoing item (reply with blog is my personal favourite)
  • It is technology agnostic: Items can end up in my inbox using push technologies like eMail, web services or message queues, or pull technologies like RSS and Atom, or custom transports provided by my applications
  • It is unlimited: Whatever enters my inbox stays there until I say otherwise (where "say otherwise" would include compliance and retention rules). The idea of an archive is a technical implementation detail I don't want to bother with (if that happens behind the scene, so be it).
  • It is synchronised: Items showing up in my inbox are synchronised with where they came from. So an updated presentation will be reflected in the file in my inbox - unless I don't want that. (Technically that is what you do with Quickr - if only it would sync offline for me)
Looking at that list you will realise eMail ≠ Inbox. It is much more than that. There are a number of attempts to create the universal inbox: WebSphere Portal (not perceived as personal), Lotus Connections Homepage (everything but eMail) or Lotus Notes sidebars & composite applications (promising but not complete). None of them is complete yet (some of them show the potential to eventually get there). We need to broaden our understanding of the inbox. It should not be "the place where new eMail messages arrive", but rather "the place where I'm ready to pay attention to new [things|stuff|information|...] and can find what I'm looking for." I had an interesting insight recently in a chat with a (very competent) secretary. She stated "I keep everything in my inbox". Mentally rolling my eyes, I set out to explain information management with my general opening question: "Hmm, interesting. Show me". To my surprise her eMail inbox was empty: she had nicely labelled action folders, information she wanted to keep was in her journal, she kept track of co-workers and friends with the Twitter plug-in, the Sametime UC2 plug-in gave her access to voicemail etc. For her, anything in Lotus Notes was "the Inbox". Which just confirms my point of view: we need a single point of entry for digital items.


Accessing EXT2 data from Mac OS X - works on Snow Leopard (partially)

SWMBO and "The Gentlemen" use Macs at home. I got a bunch of disks formatted with EXT3 and EXT4 which they also want to access, so I went out to research the topic. The EXT file systems have the unique feature of being backwards compatible: a driver written to access EXT2 will still be able to access an EXT4 disk (obviously without access to newer features). There is a commercial product available, but that wasn't what I was after. After wading through a lot of discussion board posts I did the following:
  1. Download and install MacFuse
  2. Download and install MacFusion. MacFusion is the GUI to configure MacFuse. Unfortunately it turned out, that they don't support EXT yet, but you can help and vote for this enhancement
  3. Download and install MacPorts (oddly, I needed a reboot - or so it seemed - to get it to work)
  4. Open a terminal window and type: sudo port install ext2fuse (you have to provide your password). That command sent my Mac off for quite a while, running a GnuMake/TCL script to download and configure all dependencies: expat, gperf, libiconv, ncursesw, ncurses, gettext, ossp-uuid, pkgconfig, e2fsprogs, macfuse. (This failed, since the macfuse installer didn't recognize Snow Leopard. I tried to hack the Portfile with little success.)
  5. Download and install Fuse-ext2 from SourceForge. This gives you - when auto-mounting - read-only access (and a nice setting in the preferences)
  6. Reboot the machine and your EXT2/3/4 drives show up in Finder. I don't know if that works for internal partitions, but it worked very well for the one connected via USB

Lessons learned

  • On Snow Leopard EXT2FS didn't work
  • Downloading the sourcecode for Ext2Fuse from SourceForge and trying to compile on 10.6 (sudo ./config, sudo make, sudo make install) didn't work either
  • Installing MacFuse before MacPorts didn't help to avoid the dependency check
  • There are two projects porting Linux software to the Mac: DarwinPorts and MacPorts. Downloading DarwinPorts gave me a file MacPorts1.7.0.pkg, while MacPorts delivers a file MacPorts-1.8.1.pkg. The two projects seem to draw from the same source
  • There seems to be a way to go before everything works again on 10.6
  • Once you leave the realm of i[Insert-your-average-Mac-application-here] it gets as powerful and as complicated as any other OS
As usual YMMV.


Java ClassLoader fun with getResource()

DXLMagic will have a Java UI. To be flexible I store my UI definitions inside the JAR file and load them at runtime. I learned a few lessons in the process:
  • When you use Class.getResource() inside a static method of a class you won't get access to any of the resources in the JAR. You need to have a normal instance of a class.
  • Instantiating an object inside a static method doesn't help either.
  • Class.getResources("*") doesn't return anything, even if you have valid resources in the JAR. So wildcards either don't work or work very differently.
  • It really pays off to change the editor preferences in Eclipse to nag more about coding style and potential code problems
  • Crap4J and PMD are your friends, as is Coverclipse
  • Debugging is fun in Eclipse, unless you only have one smallish monitor
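For illustration, here is a minimal sketch of the pattern that worked for me - loading a classpath resource through an instance rather than from a static context (the class name and the resource name are made up):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;

public class UiLoader {
    // Instance context: getClass() resolves against the ClassLoader that
    // loaded this class, so resources packed into the same JAR are found.
    public String loadResource(String name) {
        // A leading slash makes the name absolute within the JAR/classpath
        InputStream in = this.getClass().getResourceAsStream(name);
        if (in == null) {
            return null; // resource not on the classpath
        }
        StringBuilder result = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                result.append(line).append('\n');
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // "/ui/mainwindow.xml" is a hypothetical UI definition name
        String ui = new UiLoader().loadResource("/ui/mainwindow.xml");
        System.out.println(ui == null ? "not found" : ui);
    }
}
```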
As usual: YMMV.


Help needed: copy Eclipse IResource from/to files using ANT

In Eclipse 3.0 one big change was the transition from a physical file system to a virtual file system. Instead of Java File objects, Eclipse uses IResource to access file systems. Of course using files still works. As far as I know the Eclipse ANT tasks work on files, not on IResources. I'm looking for a way to use ANT to copy from IResource to File and back. Could be a custom ANT task. If someone wants to write it (and contribute it back to the ANT contrib project) I'm willing to make a donation <g>. Ideally it would use the same syntax as the ANT copy task.
Update: It is done - not in a generic way, but suitable for Domino Designer. You can download the Import/Export Plug-in for Domino Designer from OpenNTF. The plug-in has an ANT interface.


How CRAPpy is your Java code?

I do a lot of code review lately: LotusScript, JavaScript, Java etc. When I encounter code that is difficult to understand I'm usually met with the defence "But it is working". Funnily enough, that defence is self-defeating. I never do code reviews (short of the we-have-fixed-it-now-have-a-look type) of code that works well. Being asked to review code happens because there are problems (I really would love to do a the-proud-code-parents-want-to-show-off-their-really-pretty-baby review once in a while). The typical problems are:
  • Poorly documented code
  • Lack of decomposition
  • Slow code (e.g. getNthDocument)
  • Lack of separation between user time and backend time
  • Lack of caching/object reuse (e.g. Connection pools, Profile Fields)
For LotusScript there is Teamstudio Profiler. For Java you can use Crap4J. Crap4J is not a profiler but analyses your code's complexity and coverage. Complex code translates into application risk. You get a nice benchmark and a detailed report on the individual methods, so you know what to fix. Crap4J runs as an Eclipse plug-in. Go get it.



I've been asked by various people what I use to create the diagrams and illustrations in my blog here. I am using SmartDraw. It makes it easy for people like me (who know what they want to show but lack graphical talent) to create reasonably good-looking business graphics. Have a look at some samples, their tutorials or follow their blog. My only grievance is that they don't have a Mac or native Linux version (and it doesn't work with Wine either). The other graphical tools I use regularly are Shutter for screenshots and Buzan's iMindMap for mindmaps. While SmartDraw does mindmaps too, iMindMap draws those wonderful organic connectors - and runs on Mac/Linux too. Occasionally I also use Dia, Inkscape or Gimp.


How to explain "What is Server Virtualization"

When explaining server virtualization to non-technical people, the easiest way to confuse them is to introduce the trade lingo: host, guest, hypervisor and so on. After discussing virtualization for a while I came up with a good analogy: "Server virtualization is like car sharing". If you commute to work in a car, have a look out of the window: how many cars, perfectly capable of carrying 4 or more people, transport just one person?
No virtualization leaves server resources unused
(Think server = car.) The same is true for many servers: they run under-utilized. The number of seats would be the equivalent of the I/O capabilities and the cruise speed (or horsepower). You can go through a number of phases in easing your traffic congestion (in IT: work overload for administration and the IT budget):
  • Pool the cars: The equivalent of moving physical x86 servers onto a virtual infrastructure, still on x86 architecture. Xen and VMware lead the charge here. Of course if you have 20 people one car won't help. And you know what a hassle it is to distribute a crowd into the available cars.
    x86 based virtualization uses resources better, but the container size stays the same
  • Take a bus: Move workloads (if suitable) to high-end hardware like IBM AIX, Sun/Oracle Solaris or HP-UX
    Unix Virtualization allows for more loads per server
  • Take a train: Replace all the boxes with an IBM zSeries. The z10 would be the equivalent of a MagLev train (short of the cost of course)
    zSeries virtualization - IBM reduced several tens of thousands of servers to a few mainframes
  • Fly: In the clouds. Cloud computing is the logical end-point (keep in mind: flying is not the one-size-fits-all solution for your travel needs - just try to fly to the grocery store down the road). However, server virtualization and cloud computing differ in a few aspects: the cloud user is no longer aware of physical machines; they are completely managed by the cloud provider. A cloud provider would typically not offer an OS container (like a virtual server), but a specific execution service: storage (file and database), computation (application server) and presentation (web server with various protocols), or even higher-level services like data sync, ERP or CRM. Nevertheless virtualization is a step in making the cloud work for you.
    The cloud as ultimate virtualization - if bandwidth allows.
Advanced topics here: car pool management, bus maintenance, train scheduling (ask your z/OS or VMWare experts for that).
As usual: YMMV.

Update: There is a nice article on developerWorks explaining more about virtualization in the context of Domino on Linux on z/OS.


Code Quality and Decomposition

I spent the past few weeks reviewing code in Lotus Notes applications. While the robustness of the platform amazed me, the code I saw didn't (short of that one form with 1932 fields and 212 @DBLookups). There seems to be a common lack of coding quality among corporate LotusScript developers. I hear: "C'mon, what is your problem? The code is working." It reminds me of the guy who jumped from the 88-story building, stating "So far everything is fine" while falling past the 10th floor. Unfortunately, very often the person who writes the code (being a contractor) doesn't have to maintain it, so bad engineering kicks in. When you write code you should make one base assumption: "The guy who will have to maintain your code is an armed maniac and has your address." Sadly the statement "It is working" neglects the most basic principle of software engineering: we don't write code for machines, we write code for humans to understand. Machines don't "understand" code, they just execute it.
Enough of the rant. What needs to be done, what makes good code? In a nutshell: decomposition. Decomposition is a fancy word for breaking down tasks into smaller units until a unit solves exactly one problem. The recommended approach for this is top-down development. The Stanford Computer Engineering class uses your morning routine as an example. The big task is "morning routine". The morning routine can be broken down into: get out of bed, morning hygiene, breakfast, get to work. These tasks can be further broken down; let's take breakfast as an example (Stanford uses morning hygiene, so you now have two): prepare breakfast, eat breakfast, read newspaper. Prepare breakfast can be broken down into: kiss wife, make coffee, make eggs, get juice, make toast. Make coffee can be broken down into: get water, get coffee powder, fill machine, boil coffee. Get coffee powder can be broken down into: get box with beans, fill grinder, grind, get the ground powder. And so on.
Whatever programming language you use (even COBOL), you can just write down in natural language what you do and then implement the subroutines:

Sub Breakfast
    PrepareBreakfast
    EatBreakfast
    ReadNewsPaper
End Sub

Sub PrepareBreakfast
    Do Until sheIsSmiling
        sheIsSmiling = KissWife
    Loop
    MakeCoffee 2
    MakeEggs 4
    MakeToast 2
End Sub
This LotusScript was converted to HTML using the ls2html routine, provided by Julian Robichaux at
There are a number of tips (lifted from the Stanford lesson without asking) around decomposition:
  • Break your program down until a routine solves just one problem. That one problem (e.g. HaveBreakfast) then gets broken down into smaller tasks. When you have reached "one problem" is a subject of heated discussion, but you can tell for sure that acquiring a collection of documents, looping through them and manipulating one document at a time are 3 problems.
  • Methods are short. In the Stanford lecture a number between 1 and 15 lines was stated. I would say: a method (function, subroutine) needs to be readable on screen without scrolling (so your methods get shorter when you develop on a netbook)
  • Methods have good names. If you read your program out aloud, it should tell the story. function1, function2 doesn't cut it. The computer doesn't care, but keep in mind you write your code for other people to read and understand. (OK, nothing beats this COBOL statement: PERFORM makeMoney UNTIL rich [full stop])
  • Methods have comments. What does the routine do? Any special considerations? What are the pre- or post-conditions? For LotusScript use LSDOC to generate documentation out of the comments.
  • Decomposition is valid for procedural, functional and object oriented programming.
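For the Java-minded, the same breakfast decomposition might look like this sketch - the top-level method reads like the natural-language description, and every helper solves exactly one problem (all names are, of course, made up for the example):

```java
public class MorningRoutine {

    // Top level: reads like the story
    static String breakfast() {
        return prepareBreakfast() + eatBreakfast() + readNewspaper();
    }

    // One level down: still one problem per method
    static String prepareBreakfast() {
        return makeCoffee(2) + makeEggs(4) + makeToast(2);
    }

    // Leaf methods: each does exactly one thing
    static String makeCoffee(int cups)  { return "brew " + cups + " cups of coffee; "; }
    static String makeEggs(int eggs)    { return "make " + eggs + " eggs; "; }
    static String makeToast(int slices) { return "toast " + slices + " slices; "; }
    static String eatBreakfast()        { return "eat breakfast; "; }
    static String readNewspaper()       { return "read the newspaper"; }

    public static void main(String[] args) {
        System.out.println(breakfast());
    }
}
```

Note how every method fits on a fraction of a screen and the call tree mirrors the task breakdown above.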
You want to know more? Take the free course and read books about OO design and software engineering - or Code Complete and Beautiful Code.


Before you blink the feature is there - add geo location information to your Ajax applications.

If you want to know where an IP address belongs, you now can use a free service. As Volker lets us know, you could have the whole database at your (local) service. But why do all the heavy lifting if you just need an IP address from time to time? iplocationtools offers a simple REST API to query an IP address, returning an XML or CSV result. The only thing missing was a JSON return value (and JSON *is* the flavor of the day, isn't it?). So I asked nicely and about 7 hours later (given the difference in time zones: that is pretty much instantly after getting the first coffee into your face), the interface was ready. Now I wish other ideas would be implemented with the same speed.
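Once the JSON flavour exists, consuming it from Java is straightforward. A hedged sketch - the payload shape and the field names below are invented for illustration and are not the service's documented schema:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GeoLookup {

    // Extract a string field from a flat JSON object like
    // {"country_name":"Singapore", ...}. Good enough for a flat payload;
    // use a real JSON parser for anything nested.
    static String jsonField(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // In real use this string would come from the REST call;
        // the IP and field names here are illustrative only.
        String sample = "{\"ip\":\"203.0.113.7\",\"country_name\":\"Singapore\",\"city\":\"Singapore\"}";
        System.out.println(jsonField(sample, "country_name")); // Singapore
    }
}
```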


Second Thoughts

I'm having second thoughts about saying nice things about Microsoft when reading this. Let's hope /. got it wrong and they didn't fall victim to DRM, but to an overzealous (Beta, anyone?) protection of executable files.
Repeat after me: DRM is bad for the customer.


More on eMail Retention

eMail retention seems like a never-ending story, even when thought leaders kiss it goodbye. Ferris reminds us that we must not let the discussion/decision about the applicable retention policies stop us from archiving eMail at all: "Defining retention rules is a continuous and complex process of prototyping and refinement. [...] If you wait until the rules are fully defined, you’ll never get started." Of course, lack of action sometimes seems intentional.


An old dog peeing on the rug?

"Microsoft is like an old dog that even when beaten won't stop peeing on the rug." - That's not my opinion, but a statement made at Microsoft Watch. While initially I had a good laugh, I then felt that statement is a little over the top. I certainly don't like the way they do business, but I recognize them as a formidable technology force. I don't think Microsoft ever was very innovative, but they have an important skill lacking in so many IT companies: picking up underdeveloped ideas and making them easier to use for a broad set of users and (even more so) for developers.
When I compare the learning curves for VB.NET (or C#) and Java: it is so much easier to get started in VB. Just try it: write a client application that shows a window with a menu and one label that says "Hello World". Of course the learning curve is just one aspect; when it comes to higher functions the battle is on (and I'm biased, so I won't comment). Imagine how boring it would get if there wasn't heated competition, and yes - a superior backend isn't enough. So we (we as in the Lotus community) kind of owe Microsoft for being around. So I won't call them "old dog" but "favorite foe".


Interesting Development Tools to watch

I come across a lot of tools I see potential in. These are my latest findings:
  • Pivot Toolkit: A Rich Internet Application toolkit. Interesting: it uses XML for the UI definition, which can be loaded at runtime. Could supersede Thinlets if it runs on mobile devices.
  • JFreeChart: Java charts for everything. Works on Windows, Linux and Mac, and in applets. And it works in Notes 8 clients, as Julian has shown. Don't forget to buy the Developer Guide.
  • XML Diff and Merge: I had a deep look at a lot of XML files lately. This little tool from alphaWorks helps to work on changes. For a fuller experience you can also look at Oxygen.
  • Not a development tool per se, but an interesting source of visualisation ideas: ManyEyes.


Windows 7(?) Street Test

ZDNet Australia took the next great UI for a street test. Well, I still like GNOME better. You like eye candy? Just look at what Google thinks it is.


DogMind - Escaped from the Lab

IBM internally runs a Technology Adoption Program (TAP for short). TAP follows the principle "let us throw it at the wall and see what sticks", meaning it is our breeding ground for innovation. All internal software and all products go through TAP before they graduate into "released software" or "good to use for all". I think this is very smart. I like trying new stuff and balancing between leading and bleeding edge. Others are less adventurous. TAP allows me to try things within the boundaries of our corporate governance. Real nice things come out of TAP. One, which I'm particularly fond of, is a new visualization for Lotus Connections Dogear bookmarks. It is called DogMind. It uses a mindmap drawn from Dogear tags to visualize connections.


The best way to fix software problems

The natural enemy of any software project is warped communication. To minimize the damage, all agile software development methods use short cycles and close interaction with the user. Over the years I learned (partly the hard way) that what users say hardly correlates with what users do. So the final verification usually happens during the User Acceptance Tests (UAT). The problem here: the project is mostly concluded when UAT commences, and conflict between users and developers is guaranteed. I have yet to encounter a specification that wouldn't allow for explosive ambiguity (or is so detailed that, when implemented, it is 100% not what users actually need; nota bene: need, not want).
Ideally users should be able to test software before any code is actually written. This is usually accomplished using prototyping tools. The catch: to get reasonable results, the prototypes take almost as long to build as the real product. But there is a solution. Do this:
  • Use an open process to gather requirements. IdeaJam is great to collect and vet ideas from a broad base of users. You only need to be careful not to run into the say/do or want/need trap.
  • Create low-fidelity prototypes. Paper is a good start, but hard to distribute. Balsamiq Mockups or Denim are suitable tools. Denim is good to visualize links between screens and flows, while Balsamiq Mockups shines when it comes to UI creation. In a perfect world Balsamiq would release an add-on for IdeaJam (wink wink)
  • Develop effective use cases to define the interactions users want to complete with the system. (Use cases have patterns too.)
  • Test the screens: Let users interact with the mockups (you want to print them then) using paper prototyping (you want to read the book, or its newer cousin). The interaction reveals missing or complicated steps. If you find a missing item you can fix it in 3 seconds using a pen. Nothing beats that. It is great fun.
No idea what a session could look like? Nigel and I once recorded a session with a simple webcam (you see the tripod in the recording), have a look:


Social Software Adoption

US President Obama is credited for his effective use of social software. Edelman just published a paper titled "Social media lessons from the Obama campaign" as part of their insight series. From the description: "Barack Obama won the presidency in a landslide victory by converting everyday people into engaged and empowered volunteers, donors and advocates through social networks, e-mail advocacy, text messaging and online video. By combining social media and micro-targeting in the manner that it did, the campaign revealed force multipliers that are already being adopted as part of a new communications model. In The Social Pulpit, Edelman’s Digital Public Affairs team in Washington, D.C., examines the tactics of this revolutionary campaign and what it means for communicators in a new era of public engagement." You can download the report for free; it makes an interesting read. I liked the proposed stepped approach dubbed "Crawl, Walk, Run, Fly".
Social Media Phases: Crawl, Walk, Run, Fly
You might be in for a crash landing when short-cutting the process.


eMail Retention Policies

David Ferris, principal and founder of Ferris Research, sums up The State of Email Retention Schedules. It seems to me that a lot of organisations follow the motto "ignorance is bliss". However, on closer look it doesn't look like ignorance anymore, but rather confusion, with many forces/interests pulling in different directions:
  • IT management likes to keep retention periods short. Short periods require less storage, less computing power to search and analyze the stored data, and offer less data (read: cost) that might get subjected to a discovery phase (which in Anglo-Saxon jurisdictions typically has to be paid for by the company)
  • Legal likes to keep retention periods short. Less data stored means less risk in a discovery phase.
  • Legal likes to keep retention periods long. Since the opposite party might be able to produce electronic communication, having retained the other end can help to verify if that exhibit is genuine.
  • Record keepers like to keep business records as required by law. Now this is a big discussion: are eMails business records? This is actually the wrong question (it is the same as asking: is paper a business record? - depends what is written on it). The right question: which emails are business records, and how do we (auto-)discover their business-record nature? Also: most acts covering electronic transactions require non-repudiation provisions. That means emails (given their content makes them business records) need to be retained before users can touch them (for incoming) or after they are finished composing them (for outgoing). So retention ideally happens at the router, using proper rules.
  • Knowledge Management likes to keep retention periods long. A lot of corporate knowledge is stored (or would "is hidden" be more accurate?) in email systems. With the right tools that can be harvested easily. However outdated information isn't KM relevant, so retention should not be too long.
  • Users don't want to be bothered. They have enough work to do and want systems that are fast (which would call for short retention) and can produce any information (calling for long retention). In an ideal world the system would take care of itself.
  • IT vendors love long retention periods. They mean more customer attention, more budget, more consulting, more hardware. But well: dentists like rotten teeth too.
In any case: without a retention policy in place, corporate management stays liable for any violation of compliance. With an implemented policy (where implemented means: defined, communicated, taught and enforced) it turns into the individual employee's responsibility. One important aspect: I believe eMail has reached its zenith as a corporate communication tool. Social software like blogs, wikis, discussion boards, team sites and instant communication (SMS, Twitter, online chat, etc.) needs to be included in retention policies.
What is your policy?


How much Microsoft Tax do you pay?

Joe Wilcox of Microsoft Watch wrote an interesting commentary about the "Microsoft Tax" enterprises pay. Microsoft currently is flaming against Apple, alleging Apple customers pay an Apple tax (with which I would agree, though the currency is not money but loss of control - a story for another time). The components of the M$ tax are:
  • Client-Access Licenses (CAL)
  • Software Assurance
  • Versioning (Pulling out features into new products or enterprise editions)
  • MDOP and Windows Vista Enterprise
Head over and read the full article. As with every tax: evasion (in the case of the M$ tax a.k.a. pirated software) is illegal, however a good tax consultant can lower your burden. Microsoft's licensing options seem to be as confusing as your average state tax code. Why not call in the experts to lower your M$ tax?


Markup your E-Discovery

Bob Spurzem of Ferris Research notifies us about a recent publication around e-discovery. In his blog entry he points to the Electronic Discovery Reference Model (EDRM) group, which announced in December its XML standard for e-discovery of electronically stored information. EDRM tries to address the headache you have preparing for e-discovery with the multitude of proprietary formats your information lives in. While a lot of EDRM vendors jumped on that, it remains to be seen whether the format becomes a standard. I surely like the better standardization and thus accessibility of meta data. On the other hand, the major document formats are ISO standardized as well. So an interesting question: should one transition the proprietary DOC, XLS, PPT formats to EDRM, OOXML or ODF? Notably absent from standardization so far, short of MIME, are email formats (MSG, OND) where EDRM could play a vital role.
I toyed around with DXL and it seems transforming a Notes document into EDRM is actually rather easy. EDRM XML has quite some activities planned for 2009, so keep a watch on the project website.
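To illustrate why the DXL-to-EDRM route looks easy: it boils down to a plain XSLT transformation. The element names in this sketch are invented stand-ins, not the actual DXL or EDRM vocabularies:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class DxlToEdrm {

    // A toy stylesheet: real DXL and EDRM element names differ - this only
    // shows that the mapping is a standard XSLT exercise.
    static final String XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "<xsl:template match='/document'>" +
        "<Document><xsl:value-of select='@subject'/></Document>" +
        "</xsl:template>" +
        "</xsl:stylesheet>";

    static String transform(String dxl) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(dxl)), new StreamResult(out));
            return out.toString();
        } catch (TransformerException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // A stand-in for a DXL export of a Notes document
        System.out.println(transform("<document subject='Hello EDRM'/>"));
    }
}
```

The real work is, of course, writing a stylesheet that covers the full DXL vocabulary and the EDRM schema, but the plumbing stays this simple.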


Markup your Strategy

No, this is not advice for consultants on how to bill their activities to customers (you know, that "share your knowledge and I tell you what you know" type of activity). This is about the wondrous world of XML Schema. While XPath (together with XSLT) separates the men from the boys, knowledge of the various available Schemata and their use separates the knowing from the clueless. There are Schemata (I'm using the proper plural; you might find others referring to them as Schemas) for almost everything. Of course, this being the real world, there are overlapping and competing specifications everywhere (e.g. OOXML vs. ODF for office documents). There is even a definition in XML for your emotions.
A very interesting Schema is the Strategic Markup Language (StratML). From the definition: "The StratML standard defines an XML vocabulary and schema for the core elements of strategic plans. It formalizes practice that is commonly accepted but often implemented inconsistently. StratML will facilitate the sharing, referencing, indexing, discovery, linking, reuse, and analyses of the elements of strategic plans, including goal and objective statements as well as the names and descriptions of stakeholder groups and any other content commonly included in strategic plans. It should enable the concept of "strategic alignment" to be realized in literal linkages among goal and objective statements and all other records created by organizations in the routine course of their business processes. StratML will facilitate the discovery of potential performance partners who share common goals and objectives and/or either produce inputs needed or require outputs produced by the organization compiling the strategic plan, and facilitate stakeholder feedback on strategic goals and objectives."
To put StratML to a test, it has been used to render the agenda of the incoming US government. Are we looking at a new level of transparency?


Beef up your programming skills - Take a Stanford course - free

After a number of years in the field we all think of ourselves as "seasoned developers". A lot of us (like me) came from other professions into IT. I studied economics during national service (where I had to learn COBOL), went to law school and hold a certification as counsellor. I also went through IBM training in their internship program in the 1980s (COBOL again, but also hardware, mainframe, midrange, PC and 1-2-3). Despite all that experience it makes a lot of sense to connect back to the roots. After all, computer science is a university subject. Tim Tripcony thinks so too. He notifies us that Stanford Engineering Everywhere offers a free course in Programming Methodology. You can now get Stanford-quality education from the comfort of your home, hotel room or plane seat. The download of lectures and materials is a whopping 20GB (I downloaded the MP4 version) in 28 lessons taught by Mehran Sahami. You start by downloading the course material and then the videos for the individual lectures. Being a good net-citizen you use Bittorrent for that. Ubuntu comes pre-loaded with Transmission as Bittorrent client, or for any platform you can use Vuze/Azureus (an Eclipse RCP application) or make your pick from a long list.


Use Swim Lanes to Document System Components

A common challenge in software development is to synchronize the different phases and stakeholders in a development project. Business users care about the business functionality, infrastructure people about the system setup (servers, network, storage etc.), interaction designers about the UI, developers about code libraries and so on. Typically you have a different set of artifacts to document and cover the various aspects. While looking at the forest of information you might lose sight of the trees. How does a user requirement map into a story, a use case, a system module, a piece of infrastructure? A neat way to show the connection between all these are swim lane diagrams. Swim lane diagrams are a part of UML and typically used to show the flow between modules of a system. I'm using swim lanes to visualize application flow with the help of Sequence, which allows me to type the flow rather than draw all of it. But the use of swim lanes is not limited to program flow. I have a great history book that uses swim lanes to show what happened on every continent over a time line. Back to software development. You can use swim lanes to document the development process and its components: story board, use case, feature, user experience, business process, tools and systems. Have a look at a great example and the explanation around it, as well as some more thoughts and downloads. How do you get your story in sync?


You never actually ...

You never actually just run an Exchange infrastructure.
You merely take a short breath before the next patch cycle.

Sorry Patek



There is quite some turmoil around the OOXML voting as an ISO standard. To me it looks like the law of unintended consequences in full swing. I think the irregularities need to be sorted out and processes need cleanup (Do they?). The whole mess seems like a warped failure of communication between an Anglo-Saxon and a Continental European view of the world (this probably warrants a longer post some other time). In short: in the Anglo-Saxon view anything that is not specifically outlawed is OK to do. For the children of the Code Civil, adhering to the intent of the law and to morals carries equal weight. While paying marketing $$$ is not formally bribery, using it as an incentive to get partners to do things they never intended becomes borderline.
Anyway, my position on OOXML: I'm in favor of OOXML becoming an ISO standard, but not via fast track. It must go through the due process (which might take a while). Eventually it could end up as an extension to ODF where ODF is lacking, which would be a good thing. But as an independent alternate standard it would be OK too. The key in any case: due process, not corner-cutting fast tracking.

08/12/2007 Browser Statistics

To prove or disprove popularity, statistics have been the medium of choice for millennia. Depending on the samples you take you will get different results. Looking at worldwide market figures, Firefox gets 17.4%, 36% or 12.72%. The picture on this little blog is quite different:

Browser Stat: IE 50.63, FF 44.36

Firefox's share of page views rose in the second half of this year from 36.70% to 44.36%. This is a 20.9% increase in just 6 months. At the same time Internet Explorer's share shrank from 57.97% to 50.63%, a 12.7% reduction. Given that the geekiest readers never hit this site but use the RSS feed, I find that quite remarkable. The second half of 2007 also saw the arrival of some mobile browsers hitting the site.
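For those who want to check the arithmetic, the relative change is simply (new - old) / old:

```java
public class ShareChange {
    // Relative change in percent between two market-share figures.
    static double change(double from, double to) {
        return (to - from) / from * 100.0;
    }

    public static void main(String[] args) {
        System.out.printf("Firefox: %+.1f%%%n", change(36.70, 44.36)); // about +20.9
        System.out.printf("IE:      %+.1f%%%n", change(57.97, 50.63)); // about -12.7
    }
}
```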


I need a Dojo expert for 6 weeks in Singapore

For a project in Singapore we need a Dojo expert to design a web UI with high interactivity. The J2EE and Domino backend guys are already in place. The project is interesting and will last about 6 weeks. Any takers?


Tech University Jakarta - Project Worst Practices

On Nov 06 2007 I'm speaking @ Tech University in Jakarta. My topic is "Project Worst Practices". This is not competition for Bill's famous topic but a stab at lousy project management. My collection of despicable practices and beliefs is summarized in a Mindmap:
Project worst practices
Click on the image for the full size graphic.


Michael Vizard on Microsoft Exchange

Michael Vizard writes on Microsoft Exchange (emphasis mine)

"Within the land of IT, nothing is a bigger pain to own, manage and run than Microsoft Exchange. Everywhere you go customers have horror stories about the installation, maintenance and, above all, uptime of their Microsoft Exchange implementations. And worse yet, they will all tell you they are paying top dollar for the privilege because the expertise needed to successfully run a Microsoft Exchange server is some of the most expensive in the IT labor pool."


Software Pricing and Software Risk

Most software created today is bespoke software: code that runs in one organisation and is never resold or passed on (and sadly hardly reused). This entry is about pricing the creation of bespoke software. It is not about pricing of standard software; that is a topic others have to fight over. So what pricing models are out there? We make the simplifying assumption that you have an idea what effort is required and that we can ignore market forces (like: "this is the price we pay for this service here").
At the two ends of the spectrum are "time and material" (TM) and "turn key projects" (TKP). TM is the delivery model corporate developers typically work with (yes, there are cost centers, so it might be different in your company), while TKP is the sole model used when software projects are tendered out.
In a TM model finding the price is pretty easy: effort + profit margin = price (remember, we exclude market forces in this example).
Time and Material Pricing
The calculation is simple, since any change in requirements, any unforeseen complication just leads to more billable hours. Everyone loves this model... except the customer. In this model 100% of the risk is on the shoulders of the project sponsor. Naturally project sponsors or customers want to limit that risk, so they push for fixed deliverables and turn key pricing. This seems to be a sensible approach. However the way a contractor calculates the price of the software becomes very different. First: the internal pricing is always TM, since you pay your staff a monthly salary. So when accepting a TKP there is a substantial added risk that the contractor has to bear. Risk translates to money. So the calculation suddenly looks like this:
Price with risk
While it looks like that the profit margin took a dent, typically the risk margin is added to the project costing (especially when it is an internal cost center). So the real picture looks more like this:
Software Pricing and Risk
Since risks can be expensive, both sponsor and contractor try to minimize the risk. The usual approach is to flesh out detailed specifications of what needs to be done, what is included and what is out. These specifications then are the ultimate benchmark to decide whether the contractor has fulfilled the obligations and gets paid. In other words: the system is completely specified before work commences. With some notable exceptions it is consensus that big up-front design doesn't work. This insight hasn't reached the teams that design tender specifications. In my personal experience: the systems users appreciated most when delivered were the ones that had the least in common with the original design specifications (but that might have been just my dumb luck).
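To make the difference concrete, here is the pricing arithmetic with made-up numbers (a 20% profit margin and a 30% risk margin are illustrations, not real rates):

```java
public class ProjectPrice {
    // All figures are made-up illustrations, not real rates.
    static double tmPrice(double effortCost, double profitMargin) {
        // Time and material: effort plus profit margin, risk stays with the sponsor.
        return effortCost * (1 + profitMargin);
    }

    static double tkpPrice(double effortCost, double profitMargin, double riskMargin) {
        // Turn key project: the contractor bears the risk, so a risk margin is added.
        return effortCost * (1 + profitMargin + riskMargin);
    }

    public static void main(String[] args) {
        System.out.println(tmPrice(100_000, 0.20));        // TM price
        System.out.println(tkpPrice(100_000, 0.20, 0.30)); // TKP price, risk included
    }
}
```

The delta between the two numbers is exactly the risk margin the sponsor pays for shifting the risk to the contractor.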


How should organizations implement virus protection?

Virus protection is a discipline of risk management. 100% protection is neither technologically nor economically feasible. When implementing virus defenses, an enterprise needs to determine its risk level and take action according to its perceived need for security. This need will not only be determined by internal factors, but also by governing laws and principles. To get started enterprises can turn to established guidelines like ISO 27001. ISO 27001 certification can be used as a driver to implement a sound security policy.
Comprehensive virus protection for any organization needs to be implemented in layers and must be part of a more complete security and risk management initiative. You can borrow the principles from the blueprints of the great cities of the Middle Ages: not a single but multiple walls, a ditch, guards at the gates, signal towers, nearby allies and citizens' vigilance constituted their defense system. The number of layers to be implemented depends on the risk level determined beforehand.
To guard the "gates" a twofold approach must be taken: disallow known trouble makers from reaching you and inspect arrivals carefully. The first task can be achieved using spam filtering techniques like blacklisting or content recognition, the second by using virus scanning and content blocking. Important aspect here: you should reject a message as early as possible. There is no point scanning a message's content if it could have been rejected for trying to deliver to an unknown user in your domain or for being sent from an origin that is known to a blacklisting service.
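The "reject as early as possible" ordering can be sketched as a chain of checks, cheapest first. The user list and blacklist below are illustrative stand-ins for a real directory lookup and DNSBL query:

```java
import java.util.*;

public class MailGate {
    // Illustrative stand-ins for a directory lookup and a DNS blacklist query.
    static final Set<String> KNOWN_USERS =
        new HashSet<>(Arrays.asList("alice@example.com"));
    static final Set<String> BLACKLISTED_HOSTS =
        new HashSet<>(Arrays.asList("spam.example.net"));

    static String verdict(String sourceHost, String recipient, String body) {
        // Cheap checks first: reject before spending cycles on content scanning.
        if (BLACKLISTED_HOSTS.contains(sourceHost)) return "reject: blacklisted origin";
        if (!KNOWN_USERS.contains(recipient))       return "reject: unknown recipient";
        // Expensive content scan only for messages that survived the cheap checks.
        if (body.contains("EICAR"))                 return "quarantine: scanner hit";
        return "accept";
    }

    public static void main(String[] args) {
        System.out.println(verdict("spam.example.net", "alice@example.com", "hi"));
        System.out.println(verdict("good.example.org", "alice@example.com", "hi"));
    }
}
```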
Having current virus scanner signatures might give enterprises a false sense of protection, therefore it must be complemented by digital-fingerprint-based file blocking and quarantine to catch unknown harm. This way any executable content can be blocked, and unknown malware escaping the scanning patterns will be captured and blocked swiftly.
All "gates" need to be protected equally: email, instant messaging and individual PCs, where removable or portable media could pose an attack vector. The signal towers would be the notification system that alerts all gatekeepers if one of the gates encounters an attack, to improve the network's resilience. This notification feature must include the network protection layer (a.k.a. the firewall), so an attacked or infected segment can be isolated automatically.
Citizens' vigilance can be achieved with meaningful training and regular updates on the security front. If every employee is able to identify a suspicious entry (mostly via email), the risk of an infection is lowered substantially. Finally, virus protection is no one-time effort: scanning patterns need to be auto-updated, new threat sources blacklisted and employees updated on the latest developments in network attack and protection.

Spam is a very popular attack vector, so head over to Chris and learn about Domino SPAM fighting


To REST or to SOAP for mobile applications?

I'm toying around with J2ME applications running on multiple mobile devices. Since I want them to run on most mobile phones I settled on Midlets. One of the questions I was musing about is how to do the data communication with the back-end. Despite the "chattiness" of the data I already decided that I will use XML as the data format (less hassle with diverse sources) and web services as the delivery method (the telcos who charge per kilobyte bribed, er, talked me into that).
To parse XML in a Midlet you can use kXML or kSOAP. A few questions remain that I haven't found answers to yet. Maybe someone can point me to answers or hints:
  1. Can I use gZIP from Midlets?
  2. How to integrate SMS (to trigger a pull) with a Midlet?
  3. Should I use REST or SOAP for the communication? REST seems to be simpler but SOAP is better supported in Domino Designer.
  4. How do I integrate HTTPS into a Midlet?
  5. Can I access and transmit the phone ID (IMEI) from a Midlet?
A seasoned J2ME developer probably could recite the answers before breakfast, but they are new to me.


Did you Scratch today?

Teaching kids (especially mine) programming has been on my mind for quite a while. The discussion on Slashdot about my query was very inspiring. However it didn't seem to surface the right tool. Now, again inspired by a Slashdot article, I found Scratch. Looks very much like fun to me:

Scratch Programming Language

Now they only need to add interfaces to Lego Mindstorms and the Picocricket and I'm a happy camper.


A Cost Analysis of Windows Vista Content Protection

Peter Gutmann at the University of Auckland has written an analysis of Windows Vista's content protection titled A Cost Analysis of Windows Vista Content Protection. The Executive Executive Summary says it all: "The Vista Content Protection specification could very well constitute the longest suicide note in history".
It makes a very interesting read when you consider adopting Windows Vista, be it at home or in your corporate context. The document seems to be very alive, with footnotes and remarks being added. Since pages on university sites have a shelf life usually limited by graduation or employment, I mirror the article in its status of today, 21st May 2007.


XML / XForms / J2EE Job in Munich

My old colleagues in Munich are looking for a senior developer who will help them develop UMsys, the integrated environmental management system, to its next level. You should have a sound idea about web standards, XML and Java. If W3C is your middle name, you can bet on a place on the short list. UMsys runs on a J2EE server and uses an XForms implementation, "Orbeon Forms" provided by Orbeon, as its main web interface.
Contact information can be found on the IN+ website.


The Future of Java in the Enterprise

Working for IBM has its special perks. One of them is direct access to a huge bunch of really smart people. As you might (or might not) know, IBM maintains its own version of the Java JVM. In a recent chat with our researchers I could take a glimpse into the future of Java for the enterprise. Since there will be so many new features in this upcoming version it will be named "Java Enterprise Edition Extreme", or short J3E (or as the researchers like to put it: J E power three). Based on IBM's version of Java 8 (we jump a few versions to get ahead of SUN), it will not only feature all-time favourites such as Aspect Oriented Programming (AOP) and a multi-core threading optimized compiler (MTOC) but also a new persistence interface (code name "deep freeze") that persists Java objects into various open standard disk structures (my favourite being Linux EXT4 streams).
But there is more to come: IBM's processor unit will release new versions of the Cell and Power processor families that feature a Java-On-Silicon JVM (JOS-VM), making Java execute without the need for an operating system. Lotus Expeditor will take advantage of these new abilities. Especially the upcoming Cell Micro processor with barely 0.1 Watt power consumption will be a hit for mobile devices running J3E applications (IBM Websphere Portal on your wrist watch anyone?).
Of course Java developers will have to get used to a few changes. D. Doligez from our research labs explained to me: "The biggest change is that we had to let go of the venerable Web Application Repository (WAR) files. It simply can't deploy to all our target JVMs, especially the JOS-VMs. We created a new format we call "Lean Object Versatile Extension" (LOVE) that will serve as container for both regular Java 8 as well as J3E applications. To create files in this new format we will not only support Ant and Maven but also the popular make utility".
IBM marketing is planning a media blitz to introduce J3E once it is ready for prime time. They have enlisted a well known artist to promote J3E with IBM's new Java 3E tag line.

Update: An early version of IBM's Java8 is available for download.


Vista dreams shattered

I wanted to know what it takes to run Vista. So I headed over to CNet's "Are you ready for Vista" advisor:
Shattered Vista Dreams

Seems like I'm in for some serious hardware shopping. This looks more tempting every day. Anyhow, I moved most of the data off NTFS onto a NAS and am experimenting with an alternative. My niece from Australia is here for a visit and after a week she hasn't figured out that it is not Windows she uses to read her email.


No more "Everybody uses MS Outlook at home"?

Vista is coming, so is Office 2007. The brand new all-shiny ribbon looks utterly familiar (OK, it was at the bottom of the window and less cluttered in 1998). It seems Microsoft has taken a few lessons from IBM (not only about software processes from the OS/2 team) and comes up with the most confusing licensing scheme ever. While the Vista versions can be understood (4 packages, take the biggest, have it all), the Office schemes are pretty confusing. 8 editions are available (if you are an enterprise, otherwise it's 6) which are all packaged differently. Most stunning: Home, Student, Small Business, Professional and Ultimate don't contain the Outlook mail client anymore. Is Microsoft ignoring their loyal student user base (hey, they will make future purchase decisions!)? Or do they want to hook them onto Windows Live? Small businesses might not be amused either. And if you want to get as many features as possible you have to shell out a whopping 1078 USD for Office & Vista Ultimate (and end up using Groove for eMail). That might be even more than your Vista-ready PC costs you.
Read the full review yourself.


When licensing gets in the way of sales and revenue what will happen?

All publicly listed companies are infected with a serious virus of the type "You-must-grow-your-revenue-or-die". The stock markets punish "flat earnings" more severely than posting a loss. Yet flat earnings could read: we made 10 Billion last year, we make another 10 Billion this year, and voila, your stock goes south. So every enterprise is looking to grow revenue. While acquisitions are very much in fashion to grow revenue, the core strategy falls into two categories: upsell your existing customer base and acquire new customers. As long as your market is expanding rapidly, adding new customers is rather easy. Once markets mature, adding new customers means taking them away from your competitors, which is costly and tedious (including fending off their retaliation attacks). So "upselling your customer base" is the safe bet for most sales organisations.
In software sales revenues are very unpredictable if you just sell licences. So every vendor tries to sell maintenance too. To help customers, some offer to spread the up-front payment for licences over the period of a maintenance agreement (typically three years). After that period the licence is paid off and you only continue to pay maintenance and support, which would be around half. So once you look at a six year period your costs look like this:

The Revenue GAP

Of course: your cost is the software vendor's revenue. Oops. Didn't we just conclude that "upselling your customer base" is the easiest way to increase revenue? With that little "spread your payment" option the vendor has built in a gap they need to fill with new products. As long as you create new offerings that create additional value for the customers, that might actually work. But what do you do if you just want to ship an upgrade to the product the customer has under maintenance? One way is to split your licences into "Standard" and "Enterprise". Then you explain to your customer: "What you have under maintenance is equivalent to the new Standard edition, all the new features you cherish are part of the new Enterprise edition". Of course the Enterprise edition requires a new licence -- and hooray, your revenue gap is plugged. You even might get away with it. Your customer is used to having paid 100 for the last three years, so they probably have forgotten that IT costs are supposed to go down and have budgeted 100 for years four to six. With the myopic view on quarterly results and high job rotation that is not difficult to imagine (no Samsung product needed here).
However your customer might start to remember, really run the numbers and ask some questions. My favourite: "Why do I need maintenance on the operating system? We keep desktops for 3 years and don't upgrade them. 100% of our machines use the operating system our hardware vendor preinstalled."
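The gap is easy to see in numbers. Assuming (purely for illustration) a spread payment of 100 per year for the first three years and maintenance at roughly half thereafter:

```java
public class RevenueGap {
    // Illustrative numbers: 100/year while the licence payment is spread over
    // the three-year maintenance term, roughly half once the licence is paid off.
    static double revenue(int year) {
        return year <= 3 ? 100 : 50;
    }

    public static void main(String[] args) {
        double gap = 0;
        for (int year = 4; year <= 6; year++) {
            gap += 100 - revenue(year); // what the vendor "loses" vs. years 1-3
        }
        System.out.println("Vendor revenue gap in years 4-6: " + gap);
    }
}
```

That shortfall of 50 per year is exactly what the Standard/Enterprise split is designed to plug.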
Then finally your customers might listen to Michael Sampson.


What do you think is the most important skill every programmer should possess?

Linus says:
It's a thing I call "taste".
I tend to judge the people I work with not by how proficient they are: some people can churn out a _lot_ of code, but more by how they react to other people's code, and then obviously by what their own code _looks_ like, and what approaches they chose. That tells me whether they have "good taste" or not, and the thing is, a person without "good taste" often is not very good at judging other people's code, but his own code often ends up not being wonderfully good.

There are other great questions and answers from the Olympus of our developers in Stiff's blog.


Developing OpenSource Software is fun (always) and paid for (sometimes)

Benno Stoll just published his master's thesis at the University of Zurich in Switzerland. He researched the motivation of OpenSource developers. From the summary:
42% of the time spent for open source is financially compensated. However, we have to take into account that these figures may underestimate the amount of paid work. Paid open source developers are members of well-known open source projects. Such projects, however, can afford their own project infrastructure and are not dependent on platforms like SourceForge. Thus, the share of paid open source programmers may be rather underrepresented in the study's sample. (...)
In view of the importance of fun, the present study yielded the following results:
  • Fun matters: a simple model containing fun and spare time as independent variables can explain roughly 27% to 34% of the engagement for open source.
  • Spare Time matters: the amount of time spent by open source developers is significantly determined by the quantity of spare time the programmers have. However, the availability of spare time does not matter if the open source developers are asked for their willingness for future activities for open source.
  • The joy of programming does not wear off: each additional unit of fun is transferred linearly into additional commitment.

Go read the full report


MS Exchange Administrator (With Experience on LINUX)

Is there something coming we don't anticipate? This made me smile.


Better Java with Checkclipse/Findbugs

Eclipse does a good job pointing out Java syntax errors. To prevent bugs at the coding level you need to go much further. Two utilities make your life much easier here. One is Checkclipse, the other FindBugs. Checkclipse runs as an extension to the Eclipse syntax checker and encourages you to write proper code (including white spaces between symbols, so "a+b" is wrong but "a + b" is OK).
FindBugs can run stand-alone (via command line, Java Webstart or Ant) or as Eclipse plug-in. It provides even more checking options.

Found via SDMagazin/Holub.

And while you are at it, why not test Websphere (unless you live in a restricted country, which SF thinks includes Singapore)?
Websphere Community Edition


Microsoft Exchange for DB/2

Now that I'm inside, I have access to a new wealth of information. Today I got access to a draft for a press release: In a surprise move IBM and Microsoft announce today:

Microsoft Exchange for IBM DB/2

The IBM spokesman declared: "With our scalable and robust storage technology we finally can ensure that you never have to spend time in Exchange hell anymore. Also true Microsoft shops can finally benefit from IBM technology. The code name for the project is TBE = True Blue Exchange".
When questioned about the move a long-serving Microsoft engineer stated: "With all the pressure and monstrous code in Vista, Office 12 and Exchange 12 we remembered the good times when we were working with IBM on OS/2. Basically back then we were playing Minesweeper and Flight Simulator while the IBM coders did all the work. The only thing we did was to put the Presentation Manager GUI coordinates upside down to ensure the Lotus people would use up all their resources to recode 1-2-3 on OS/2 while another team was working on Windows 3.11. We wanted the good times back, so we handed over the complete API spec to the IBM Almaden lab, of course not before they assured us they would not deliver it to the Europeans, since it is our policy to ignore court orders".
I was puzzled, so I contacted the Alpha team @ IBM Almaden. The lead architect confirmed: Microsoft Exchange for IBM DB/2 does exist.
With a big smile he added: "If you have a close look at the code, you will see, that it is actually a Domino 7.1 server in disguise. We had the messaging part, the DB/2 storage, the fault tolerance and the scalability. We just merged our Domino Access for Outlook into the Domino core and off we went."
I got my hands on some Alpha code. It is amazing. All MAPI calls work, Outlook sees a native Exchange box, even Active Directory recognized an Exchange12_01042006 version. Once I've played more with the code I'll post my test results, so stay tuned.


Losing patience

Hello Mr. Robot @, you belong to 209-128-119-045.BAYAREA.NET and it is completely pointless to add empty comments to this Blog.


When you buy software before breakfast, Zonelabs is a good shop

We have a new member in our IT zoo: a nice Sony VGN-A17GP laptop. After reinstalling the OS from the rescue disk and one gazillion patches there was the question of antivirus/firewall. Since the desktop runs happily with ZoneAlarm Pro with Antivirus I wanted to order a licence for the laptop too. To confuse me, the product is now called ZoneAlarm Suite and ZoneAlarm Pro doesn't come with Anti-Virus anymore. Before breakfast I only skim pages, and I promptly spent 50 bucks on the wrong product.
So I headed to the customer feedback page, actually not expecting much, but to my surprise, after they had their breakfast, I got a mail stating that they had refunded my purchase and I was free to order what I originally intended to.
Well done Zonelabs!


XForms - Putting your form processing on steroids

On 15 March 2006 I will speak on the Singapore XML Standard day organized by the XML Working Group of ITSC. After introducing the XForms concepts I will show how the Orbeon Presentation Server allows for easy implementation of a server side forms processing application. Get the full details here.
If you are free in the afternoon -- the event is free, too -- join me at Tower Three #14-00 Suntec City in the multipurpose hall of IDA.


I could use a little help from an NSIS expert

DominoWebDAV 0.1 will be up soon. Part of the application is a little helper application that knows how to deal with the webdav:// protocol. As "good citizens" we want to make the information that the helper is installed available by amending the user_agent string in the browser. We add something, we don't alter, so the typical browser-sniffing scripts won't break (note to all who still use them: start sniffing the DOM).
In Internet Explorer user_agent extensions are defined by a registry entry (this is e.g. how IE/IIS "knows" that a .NET runtime is installed):
HKLM "Software\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform" "Some funny string"
All entries here end up in the user_agent. In Firefox it is a little trickier; the following steps are needed:
  1. Is Firefox installed?
  2. Locate the profile directory (per user)
  3. Read the profiles.ini and locate the current user profile
  4. Look for the file user.js, create it if missing
  5. Add the line user_pref("general.useragent.extra.funnySoftware", "Some funny string"); if missing
I managed to get the part with the Registry going, but I could use some help on the Firefox part. Any takers?
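For illustration, steps 4 and 5 look roughly like this in plain Java (the real installer would of course be NSIS script; the temp directory stands in for the real profile directory located via profiles.ini, and the pref string is just the example from above):

```java
import java.io.IOException;
import java.nio.file.*;

public class FirefoxUserAgentPatch {
    // Example pref line from the steps above; the pref name is illustrative.
    static final String PREF =
        "user_pref(\"general.useragent.extra.funnySoftware\", \"Some funny string\");";

    // Returns true if the line was added, false if it was already present.
    static boolean ensurePref(Path profileDir) throws IOException {
        Path userJs = profileDir.resolve("user.js");
        if (!Files.exists(userJs)) Files.createFile(userJs);   // step 4: create if missing
        String content = new String(Files.readAllBytes(userJs));
        if (content.contains(PREF)) return false;              // step 5: add only if missing
        Files.write(userJs, (content + PREF + System.lineSeparator()).getBytes());
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path profile = Files.createTempDirectory("profile");   // stand-in for the real profile dir
        System.out.println(ensurePref(profile)); // first run adds the line
        System.out.println(ensurePref(profile)); // second run is a no-op
    }
}
```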


Become a Java Blackbelt

Do you know JavaBlackBelt? I recently discovered this site and I like it a lot. There you can take exams about Java and related technologies like XML, Eclipse etc. While commercial sites charge you a fee for taking exams, JavaBlackBelt doesn't want your money. They want your contribution. So before sitting down to take an exam you need to accumulate points to "pay" for it. Points are collected by reviewing questions, contributing new questions or commenting on existing topics. Questions, edits and comments need to be accepted (see the site for details) to score. This way you get a double bonus: not only can you test your knowledge, you are also required to crack your head over what would be a good question to ask. Clearly the site doesn't aim at greenhorns (which I suspect is the group I still belong to regarding Java), since I couldn't figure out how you could contribute or benefit with only little knowledge.


On dates, time and time zones

Lotus Notes calendaring is fully aware of time zones. It allows you to enter and view appointments in different time zones. So it seems to be the solution for a small planet... if there weren't users like me. I'm juggling time zones every day for remote and online activities: phone calls, Sametime chats, instant messages and Skype. However for physical events I switch back to "no time zone mode". So it is "We meet online 1pm your time/8pm my time" and "We meet 7pm at the gym".
When I was planning my Lotusphere attendance I was in "no time zone mode", since it is obvious WHERE the sessions will be (at least from a time zone perspective). When I switched my gadgets (laptop, PDA) to Orlando time, I got some "Session is starting" wake-up calls pretty much 11h 10min instead of 10min before. So obviously, from a user's (that's me) perspective, there is a time that is neutral to the time zone. E.g. lunch is at 2pm, whatever time zone that might be. Would a check box "Don't change time on time-zone change" in the calendar help? I'm not so sure, since calendaring is already complex enough. What a pity that Swatch .beat time never took off (at least they could fix their unusable design).


From command lines and URLs


The line between URLs and file shares is blurring. In any browser you type file:/// to open a file. In Konqueror you type smb:// to address a Windows share, and most file-open dialogues allow you to specify a file location starting with http://. However there are subtle differences. Unless your web server supports the WebDAV protocol, chances are high that the files will be read-only. MS Exchange and MS SharePoint both use WebDAV (this might be the reason why it is deprecated in Exchange 12 - you love to rewrite your apps anyway).
A big headache still remains. When clicking on a link in the browser, the document behind it is downloaded to a temp directory and then opened. Even if you could write it back, your Office application wouldn't know where to write it to. So the drill for e.g. SharePoint users is: "Right Click - Open With". You can count the support call logs...
I'm working on a strategy to solve that. A tiny helper application webDAVhelper.exe will "listen" to webdav:// URLs, look for the application matching the file extension and then call that one with http:// on the command line. So the URL webdav://myserver/files/BigFatCalculation.xls translates to excel.exe http://myserver/files/BigFatCalculation.xls.
Works like a charm... almost.
In fact we are using the command line and hope that a URL will work. So far it looks promising: OpenOffice and Microsoft Office handle opening the files very well. Microsoft Office even issues a WebDAV lock to hold on to the file. Paint Shop Pro 8, WinZip 9 SR1 and Acrobat 6 all fail to take a URL on the command line; they only allow you to specify it in the File - Open dialogue. I will investigate a little more. Nevertheless it is quite strange that reading the file from the command line has been implemented differently from using File - Open.
In case you are interested in the little tool, drop me a note.  
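For the curious: the core of such a helper is just a URL rewrite plus an extension lookup. A minimal sketch in Java (the class name and the extension-to-application map are my own illustration, not the actual tool, which targets Windows executables):

```java
import java.util.Map;

public class WebDavHelper {
    // Hypothetical mapping from file extension to handler executable
    static final Map<String, String> APPS = Map.of(
            "xls", "excel.exe",
            "doc", "winword.exe",
            "odt", "soffice.exe");

    // Translate a webdav:// URL into the http:// URL the office app can open
    static String toHttp(String webdavUrl) {
        if (!webdavUrl.startsWith("webdav://")) {
            throw new IllegalArgumentException("Not a webdav URL: " + webdavUrl);
        }
        return "http://" + webdavUrl.substring("webdav://".length());
    }

    // Pick the application registered for the URL's file extension
    static String appFor(String url) {
        String ext = url.substring(url.lastIndexOf('.') + 1).toLowerCase();
        return APPS.getOrDefault(ext, "notepad.exe");
    }

    public static void main(String[] args) {
        String url = toHttp("webdav://myserver/files/BigFatCalculation.xls");
        String app = appFor(url);
        System.out.println(app + " " + url);
        // The real helper would now launch the application, e.g.:
        // new ProcessBuilder(app, url).start();
    }
}
```

Whether the launched application can actually write back depends entirely on whether it speaks WebDAV over that http:// URL, as the post above notes.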


Off to Pakistan

"Bridging the digital divide" will be the keynote address I will deliver at the ConnectIT Conference in Karachi/Pakistan on Friday. I'm speaking on behalf of the Association of Telecom Industries Singapore (ATIS).


User Centred Design --- without the users?


I'm a member of the Usability Professionals Association as well as a member of the World Wide Institute of Software Architects. Both communities discuss the need for user centred design. Looking at our favourite development process (make your pick: XP, Agile, RUP or whatever acronym you fancy) the users' requests always come first. While I think this is an important concept, I sometimes feel uneasy about it. In most of my software projects we end up with features and design the users haven't asked for but highly appreciate. Actually I consider implementing exactly the user requirements a failure.
Kathy "Headfirst" Sierra sums it up nicely in her latest post:
The goal is to add sliders that turn out to be really important to users. And I say "turn out to be", because the most daring breakthrough products and ideas are rarely driven by user requests.
Go read it.  

Update: Thanks to Ganapathi for pointing out the typo, fixed now.


Apprentice, Journeyman and Master

You might have wondered why I use this metaphor in the sidebar to describe what I offer. I always thought about writing a long rant about it. Now I don't need to anymore. Jeff Atwood points to a blog entry by Rob Walling:
... a long text you want to read for yourself ...
The Bottom Line
Training is critical to any company that writes software, and apprenticeship is the best way to bring new developers on board, make them feel at home, improve their skills, and keep them happy and growing. You'll keep experienced developers in touch with new approaches, compliment them by asking them to share their wealth of knowledge, and hopefully create a few friendships along the way.  


Craftsmen vs. Tools


Seth Godin's latest blog entry is titled Tools vs. Craftsmen. He describes how the prices for creative tools have come down substantially. His conclusion is: "The bar's a lot higher, because access to tools is a lot easier". We have had that situation in software development since Microsoft shipped BASIC with the first PC. You can download Java, Eclipse, #Develop, the .NET SDK or Ruby for free. We got a lot more software since then and eventually the bar got lower (if you see and use crap all the time, you get used to it). However he is right: the bar got a lot higher if you want to be outstanding.


Heart(disk) transplant*

My laptop hard disk was running out of space and also started making funny noises. So it was time for a new one. First I thought about reinstalling everything from scratch. This would take some days (actually hours squeezed into spare time over a few days), which I can't afford right now. So I started to look for an easy way out. I found True Image from Acronis. It allowed me to clone my hard disk partition to the new drive that I had attached temporarily through USB. It also allowed me to resize the partition in that process. Since my laptop has only USB 1, copying the partition took a few hours (unattended). At the end True Image shut down the PC, I flipped hard disks and now I'm a happy camper. Acronis provides a 14-day trial, so for a one-off exercise it is free. True Image does backups too, so I'm considering buying it.

* The pun only works when spoken, not written.


Creepy coincidence

> I've not had paypal phishing emails for a while (thanks to Chris' counterspam). Today I went to paypal for some stuff. Half an hour later a pal-phisher mail comes through. Coincidence or tip of an ugly iceberg?


Effectiveness or Best Practises - make your pick!

Michael W. McLaughlin summarizes Best Practises (via eLearningPost):
  1. They rarely work
  2. It's a follower's strategy
  3. Change comes from within
  4. They don't come with a manual

One of the Lotus eggheads once added: "Best Practices are yesterday's technology".
MIT recommends replacing them with Signature Processes.

I would add my own little attribute list:
  • they are an attempt to contain fear
  • they stifle innovation
  • they won't provide a safety net
  • they can't replace skills


Crystal Clear

Processes and methodologies are all the rage in IT. Coming from a RAD Domino background a lot of the process steps feel quite overloaded (what the heck is a build and integration test, when I just click save on my form? - Don't tell me, J/NUnit and (N)Ant are my friends). I finally found a methodology that seems a good fit for Domino projects. It is called Crystal Clear and is described by Alistair Cockburn in his book "Crystal Clear".

Crystal Clear requires 7 properties, of which 3 are mandatory. It is a methodology light on processes and big on principles. Alistair clearly highlights that process is never a guarantee for success; it is *skilfulness* that will make your day. Processes only amplify your skill levels, so if your skills are lousy, process will make the result *very* lousy.

Crystal Clear

Go try it.


Meeting Bill Gates

What do 3000 IT professionals do together in one hall in Singapore on a Friday afternoon? Our host claimed it was the biggest professional IT crowd ever on the Singapore scale. Well... listening to Bill Gates, who gave an unexciting outlook of things announced over and over again.
So our lifestyles will be digital, our gadgets will converge, computers will recognize context and understand our voice commands, and Microsoft's 6 billion R&D spending will make all this possible. I think it's increasingly difficult to come up with something visionary that has not been spoken about before.
Implementing visions is so much harder than having them...


On Demand Software

Q: Define "On Demand Software"
A: "Make your technology sufficiently complex, so you want to outsource it to IBM!"


Our civilization runs on software

From the " Handbook of Software Architecture":

" Software is invisible to most of the world. Although individuals, organizations, and nations rely on a multitude of software-intensive systems every day, most software lives in the interstitial spaces of society, hidden from view except insofar as it does something tangible or useful.

Despite its transparency, as Bjarne Stroustrup has observed, 'our civilization runs on software.' It is therefore a tremendous privilege as well as a deep responsibility to be a software developer. It is a privilege because what we do collectively as an industry has changed and will continue to change the world. It is a responsibility because the world in turn relies on the products of our labor in so many ways".

Go read it and then rethink development resources and project management.


I want a better browser for my Tungsten

I'm quite happy with my Tungsten T3. I sync to Notes using Pylon Pro and run some custom Notes databases. I use Agendus as a beefed-up contact and appointment manager and Verichat for instant messaging. I develop applications with Simplicity and the J2ME plug-in for Eclipse. I even started fancy stuff using Thinlets (you might need to go to the Yahoo group to get the MIDlet version) and did some XML-over-HTTP stuff using kXML.
My only grievance is the browser. It is OK for some basic stuff, but I can only jealously peek at Nokia phones, Psion organizers etc. that use Opera. If you share that grievance, head over to Petition Spot and sign the petition "Opera for Palm OS".


Sunday afternoon thoughts: Craftsmanship

These days software is considered an industrialized product. One has clear processes, a proven methodology and a team of well trained IT people (architects, analysts, developers, coders, testers etc.). Still many projects fail spectacularly. Hugh MacLeod spells out the truth (while in a different context): "This isn't a record store. You can't just hire a bunch of college kids whenever there's an upswing." Software, despite all processes and tools, is a craft. I haven't come across a craft that does not require apprenticeship and devotion to learn the spirit of it. Even with the greatest tools you will find the moment where you need "some magic happens here". Katie dissects RUP and shows that little gap.

A lot of blame for the lost art goes to the MTV generation with its need for instant gratification (why learn how to operate a parachute if bungee jumping seems to offer the same kick). Blaming the youth is a sport that has been popular for thousands of years, so I don't buy it. Digging a little deeper I find a very different reason: fear of failure. We have to be perfect on the first shot. We are embarrassed if we fail. Companies only hire the top performers (and who develops them?). I think the solution to the dilemma is to put more focus on the craft, listen more and be ready to improve one step at a time.


Brunei InfoCom Technologies Awards 2004

Together with the Bruneian company Teleconsult Sdn Bhd I architected the eLearning platform ePLATO earlier this year. In Brunei we were using the platform for what I would call "narrated eForms". Complex online eGovernment forms are broken into pieces and filled in as part of an eLearning course. At the end the participant knows about the how and why of his submission and has a filled-in form ready for processing.
This unique concept of blending learning and citizen-government transactions has won the Brunei InfoCom Technologies Award 2004 (BICTA 2004) in the category eGovernment.
News coverage is a bit patchy; there are two online articles that both fail to name the winners, so I had to resort to the good old scanner to document it:


Meeting Steve Ballmer

What do Richard Stallman and Steve Ballmer have in common? A lot more than meets the eye at first sight:
One: They are each other's nemesis. They don't name each other. Stallman is "the guy who wrote the GPL" and Ballmer "the big monopolist".
Two: They both came to Singapore in November 2004.
Three: They both firmly believe in what they talk about and have a hard time acknowledging each other's view.
Four: They both could use some dress code advice. While Richard was talking barefoot, Ballmer was wearing brown shoes to an otherwise flawless dark business suit.

Ballmer and the whole Microsoft crew were talking about value creation and shifting IT spending from maintenance to more productive work like new development. Of course the new Microsoft solutions will help there (so they claim). While I applaud that goal (who wants to do maintenance anyway), it looks to me that savings in the maintenance budget will rather result in budget cuts... and I don't know (outside the consultants' scene) many successful administrators in development roles.

Ballmer claimed business value is the centre of their universe: anticipating business needs and creating value before the business community even knows about the need (or even creating the need). Microsoft, so Ballmer, is committed to innovation, which is documented in the 3000+ patents Microsoft will file this year alone. Microsoft's vision is agile enterprises driven by agile high performance teams (and Microsoft solutions of course).

On open source and Stallman, Ballmer got very firm: they don't believe in intellectual property (which is true; Stallman says IP is FUD and distinguishes copyright, trademarks and patents as unrelated rights) and all open source users are in jeopardy because lawsuits will hit them, with Linux alone violating 200+ patents (he mentioned an exact figure with "more than" in front, which is a contradiction in terms).

So what a difference: Stallman on one side believing in freedom, community and sharing, Ballmer on the other side believing in vision, responsiveness and innovation. My conclusion: they are both right and they are both wrong. It is our task to find a balance between community and commerce. For my taste commerce has the much better lobby (see a future post about pirates).

But surprise, surprise even Microsoft is not alien to the concept of free (free as in beer) software: every participant returning the conference evaluation form was rewarded with a free copy of Microsoft Office Professional 2003.

Steve Ballmer obviously was under high-powered stress; observing his body when taking questions I saw the muscles stiffening in defence. Only when I asked him how he relaxes from his job did he visibly let go and appear much more relaxed. Besides the usual stuff like family and sports (running and golf), Steve mentioned that reading eMail in Singapore Airlines' First Class would be quite relaxing. So the notion stands: Singapore Girl, you are a great way to fly.


Meeting Richard Stallman

Stallman and the NotesSensei
The Singapore Management University (SMU) invited Richard Stallman to speak about Open Software for developing countries and about software patents. The session was co-organized by the UNDP, who takes a clear stand pro open software. Stallman is a very entertaining speaker who advocates his message with great passion and humour. He made the audience including me laugh quite a number of times.
Stallman stressed that free software does NOT mean free as in beer, but free as in freedom. He highlighted that the English language seems to be poorly equipped to distinguish these terms and that local languages seem to be more suitable to express the difference between these concepts.
Stallman himself seems to personify the nemesis of any sleek software executive. He was standing barefoot at the podium, exposing manners of personal hygiene that, measured by European middle-class standards, are rather questionable. Either he didn't really care or it was a very carefully crafted performance (maybe he secretly wanted to be a member of ZZ Top).
Intellectually I think Stallman is brilliant.
The points he raised were well crafted and presented using strong metaphors and on-the-dot explanations. Stallman lives software development, so when he explains the 4 freedoms free software is about, he counts from 0 to 3. This surely earns him points with the developer community, but makes communication with people who don't understand or care for developers' lingo more difficult.
In a nutshell his stand is that software should be supported and paid for by the community of developers and users, and money be made from customization and support. Commercial proprietary software vendors in his view are landlords who are only interested in rent and want to exploit the users by making them dependent. I only partly buy that argument. It lives off the fiction that most users would be able to articulate, in a programmer-compatible way, how they want software to work. Stallman himself is the living example that development and progress are often (if not every time) driven by spirited individuals. Also it does not match my previous experiences. However I'll try to summarize his points.

29/10/2004 --- me too.

I started to use to share bookmarks. I like the ease of posting and the cross reference.
Happy sharing!


Feedburner for --- update your URLs please

Following the discussion about bandwidth consumption I decided to hand over my news feed to Feedburner. So please update your Blog readers and point my news feed to stw


New book about Sun Java Studio Creator

My friend Sachiko Hirata has released her very first book. Together with two colleagues she is covering the Sun Java Studio Creator. Congratulations Sachiko! In case your Japanese is as patchy as mine, try babelfish's help.


Dante's Exchange

Seems like the messaging war between Domino and Exchange has got a new high-profile twist. David Gewirtz, editor-in-chief, has just published his "Thirteen Days in Exchange Hell". Read for yourself. Looks like the entanglement of your messaging server with your operating system and a multitude of dependencies can turn into a nightmare. Does he look here now?


Lunch with VB.NET 2005

Microsoft Singapore invited me to have lunch today with Jay Roxe and Matthew Gertz. They are responsible for VB.NET 2005. We lunched at the Conrad Hotel in a small group of just 12 people. Matthew and Jay were very eager to hear and learn what their customers expect in the next release of their products.
While you might dismiss this as an M$-style marketing stunt, I had the very clear impression that they REALLY care! What a refreshing difference from the usual "dark empire" stuff we read about Microsoft. (Being the Lotus Domino expert, it was my role to represent a dark empire then <vbg>.)
We touched on a lot of interesting topics; two I'd like to highlight. Did you ever wonder why in .NET everything is an object: straightforward, no-brainer, simple rules? Or why CType() does a lot of checking to make your life really simple? Well, Jay did that -- after he was lecturing at Singapore's NUS for a year (draw your own conclusions).
The second: Matthew raised an interesting question about the future direction of development environments (VS 2005 will feature hints on how to correct coding errors, borrowing the technology from Word's grammar check -- and the idea from Eclipse?). His point of view, which I second, is that code alone (as seen in Notepad) doesn't tell you the whole story any more. Besides code there is increasingly meta data (coming from your UML tool, your code history, your requirement analysis) that is as important as your programming statements. So he is thinking of adding additional files hosting meta data, or of merging code into a big (XML based) meta data file.
And then he very briefly lifted the cover on what they are playing with (eventually I'll get shot for that): why is there a distinction between code and layout? Couldn't you edit code like you edit a Word document? You would write code and have your form visible as a graphic, like you embed a graphic in a Word file; your code that talks Web Services or ADO.NET is represented as a diagram; little boxes with comments (like Word's "track changes" function) pin to-dos, comments and implementation hints to the exact code position. Think rich composite document. Of course you could switch between textual and graphical representation for each block. Normal code could e.g. be displayed as a flow diagram (if you need that for your documentation today, check out Visustin). I think this, done well, will boost productivity by double digits.
I'd love to see that side of Microsoft more often.  


Not your Mom's Yahoo anymore... what CSS can do.

Yahoo is moving to a new, CSS-driven version of their portal. This is an important step in promoting web standards. I showed the beta page to a number of clients with CSS switched off to teach them about structure, and to get all the aahs and oohs once it is switched back on:

What a difference the separation of concerns (structure vs. design) can make! The beta site is widely discussed in the web designer community (1) (2) (3) (4) (5), so expect more to come.


Linux is calling


I want to find out if Asterisk is a solution to cut our notoriously high phone bill ("our" here means: my business partners all over SE Asia and Europe, and me). So I got a new hard disk and downloaded the SUSE 9.1 boot CD from Germany (the local mirror was off-line when I tried). The installation seemed quite straightforward. As before, my network card wasn't recognized (seems to be a "feature" when you have the boot CD only). The card wasn't listed either, so I picked one that sounded related and it worked. Since I'm curious what 9.1 can offer, I selected everything to be installed and picked FTP installation from the SUSE FTP server (which is located in Franconia, a region of Bavaria, Germany). It turns out that this is a stress test for my broadband connection.
I started the installation this morning at 07:30 (before bringing the kids to kindergarten and going to work) and now, at 23:00, the installation has completed just 61%. So the whole exercise will take about 24h. Lesson learned: wait until the local mirror is back. Still it is amazing how data transmission has changed. It seems just yesterday to me that we used 300 baud modems and Kermit and got all excited if we managed to transfer a 10kB file.


If Architects Worked Like Programmers

Found (and copyright?) here:

Dear Architect,

Please design and build a house for me. I am not quite sure what I need, so please use your discretion.

My house should have between two and 45 bedrooms. Make sure the plans are such that bedrooms can be easily added or deleted. When you bring the blueprints to me, I will make the final decision on what I want. Also, bring me the cost breakdown for each configuration so I can arbitrarily pick one.

Keep in mind that the house I ultimately choose must cost less than the one I am currently living in. Make sure, however, that you correct all the deficiencies that exist in my current house (the floor in my kitchen vibrates when I walk across it and the walls don't have nearly enough insulation).

As you design, also keep in mind that I want to keep yearly maintenance costs as low as possible. This should mean the incorporation of extra-cost features like aluminum, vinyl or composite siding. (If you choose not to specify aluminum, be prepared to explain your decision in detail).

Please take care that modern design practices and the latest materials are used in construction of the house, as I want it to be a showplace for the most up-to-date ideas and methods. Be aware, however, that the kitchen should be designed to accommodate, among other things, my 1952 Gibson refrigerator.

To ensure you are building the correct house for our entire family, make certain you contact each of our children and also our in-laws. My mother-in-law will have very strong feelings about how the house should be designed, since she visits us at least once a year. Make sure you weigh all of those options carefully and come to the right decision. I, however, retain the right to overrule any choices you make.

Please don't bother me with small details right now. Your job is to develop the overall plans for the house; get the big picture. At this time, for example, it is not appropriate to choose the color of the carpet. However, keep in mind that my wife likes blue.

Also, do not worry at this time about acquiring the resources to build the house itself. Your first priority is to develop detailed plans and specifications. Once I approve these plans, however, I would expect the house to be under roof within 48 hours.

While you are designing this house specifically for me, keep in mind that sooner or later I will have to sell it to someone else. Therefore, it should appeal to a wide variety of potential buyers. Make sure before you finalize the plans that there is a consensus of the population in my area that they like the features of my house.

I suggest you run up and look at my neighbor's house he built last year. We like it a great deal. It has many features we would also like in our new home, particularly the 75-foot swimming pool. With careful engineering, I believe you can design this into our new house without impacting the final cost.

Please prepare a complete set of blueprints. It is not necessary at this time to do the real design since these blueprints will be used only for construction bids. Be advised, however, that you will be held accountable for any increase in construction costs as a result of later changes.

You must be thrilled to be working on a project as interesting as this! To be able to use the latest techniques and materials, and to be given such freedom in your designs, is something that can't happen very often. Contact me as soon as possible with your complete plans and ideas.

P.S. My wife just told me she disagrees with many of the instructions I've given you in this letter. As architect, it is your responsibility to resolve these differences. I have tried in the past and have been unable to accomplish this. If you can't handle this responsibility, I will have to find another architect.

P.P.S. Perhaps what we need is not a house at all, but a travel trailer. Please advise me as soon as possible if this is the case.  


WIFI in Malaysia --- and Another Exhibition


For 2 days I'm exhibiting ePLATO in Kuala Lumpur, Malaysia. The complex where the conference cum exhibition is located also features a resort hotel, a night life spot and a huge shopping centre. A Starbucks Grande Latte goes at USD 2.50 vs. USD 3.05 in Singapore. Our booth is not very spectacular; however, on the first day we collected a number of excellent leads. It seems our regional focus and the mix of products and services look attractive to our clients.
I can't comment on our hotel, since I don't care for hotels very much. My colleagues are pleased with location, rooms and service. Internet per day goes at 6.60 USD. I can use my room TV, my laptop through a 100MBit connection (local) or WiFi at the convention centre.
Being curious about WIFI offerings I went and checked out the local providers. The bad news: WIFI for roaming users (read: using iPass) is 6.5 cents a minute. The good news: if you have a colleague with a MAXIS phone subscription, (s)he can send an SMS and get you access for a whole day at 1.32 USD. This is quite reasonable. Access for a week goes at 5.27 USD, a month sets you back 10.00 USD and a (prepaid) full year is 96.85 USD. This translates to just 27 cents a day. For this money I don't even get 5 minutes of access in Singapore. With pricing like this WIFI usage is fun. And indeed every second table at Starbucks had a laptop on it.


Presenting on eLearning

Today I will present our new project ePlato EduWare System to Microsoft and their business partners.
eLearning is all en vogue in Singapore right now. However your mileage there does vary quite a bit. So my opening slide is this:

A fish stinks from the head


GSM Command line

I recently switched my mobile phone from Nokia to Ericsson, so I got confused with the menus and where to find stuff. Then I remembered: GSM phones come with their own command line. Every setting that lives on the carrier side (like call divert) has a shortcut you can type in without digging into the logic the UI designer of your phone had in mind (or hadn't). My personal cheat sheet:
Command line                                         Function
**004*<TargetNo>#<SEND>                              Activate all conditional diverts
##004#<SEND>                                         Deactivate all conditional diverts
**21*<TargetNo>#<SEND>                               Divert all calls to <TargetNo>
*21#<SEND>                                           Divert all calls to the previously set number
*#21#<SEND>                                          Status of unconditional divert
##21#<SEND>                                          Deactivate unconditional divert
**61*<TargetNo>#<SEND>                               Divert when call not answered
**61*<TargetNo>**<seconds to ring, max 30>#<SEND>    Divert when not answered, with explicit ring time
*#61#<SEND>                                          Status of divert when not answered
##61#<SEND>                                          Cancel divert when not answered
**62*<TargetNo>#<SEND>                               Divert when not reachable
*#62#<SEND>                                          Status of divert when not reachable
##62#<SEND>                                          Cancel divert when not reachable
**67*<TargetNo>#<SEND>                               Divert when busy
*#67#<SEND>                                          Status of divert when busy
##67#<SEND>                                          Cancel divert when busy
See the full list here. Your mileage may vary!
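The codes follow a fixed pattern (activation prefix, service code, target number, terminating #), so a phone-side helper only needs simple string building. A small sketch (the class and method names are my own invention; the codes are taken from the cheat sheet above):

```java
public class GsmDivertCodes {
    // Build the code to activate a conditional divert service to a target
    // number ("61" = not answered, "62" = not reachable, "67" = busy).
    static String activate(String service, String targetNo) {
        return "**" + service + "*" + targetNo + "#";
    }

    // Divert-when-not-answered with an explicit ring time
    // (max 30 seconds according to the table above).
    static String notAnsweredAfter(String targetNo, int seconds) {
        if (seconds < 1 || seconds > 30) {
            throw new IllegalArgumentException("ring time must be 1..30s");
        }
        return "**61*" + targetNo + "**" + seconds + "#";
    }

    // Status query and cancellation use the same service codes
    // with different prefixes.
    static String status(String service) { return "*#" + service + "#"; }
    static String cancel(String service) { return "##" + service + "#"; }

    public static void main(String[] args) {
        System.out.println(activate("67", "+6591234567"));       // divert when busy
        System.out.println(notAnsweredAfter("+6591234567", 25)); // ring 25s first
        System.out.println(status("62"));                        // not-reachable status
    }
}
```

On the handset you would still press <SEND> after typing the generated string; the phone merely forwards the code to the carrier.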


Web page security

There is a nice real-world "tutorial" available on how NOT to secure your websites. On this site you can test your skills at breaking into access-restricted web pages. There are 16 levels to master; the final one is against the Apache authentication (guess how far I got).
Good fun and an eye-opener for fans of security by obscurity.


Gong Xi Fa Cai --- and fried fish Hakka style


A 106 year old Lady with a good sense of humour
Happy New Year to all of you. May the Year of the Monkey be the end of all monkey business for all of us!
The picture above shows my wife and her grandmother. The lady is 106 years old and has a good sense of humour. She actually contributed to one of the stories I use when teaching about software analysis and change management.
Since I'm very curious about cultures, the fact that I'm living in an intercultural relationship gives me ample leeway to explore similarities and differences. One day my wife was preparing fried fish Hakka (the Chinese dialect group she belongs to, the same as the founding father of modern Singapore) style.


Joel on Software Meeting


The online and real worlds start to intersect more and more. Seems like William Gibson was right. I hope so; on the other hand I have the feeling it is more like John Brunner.
Nevertheless, if you are around, join us for a Joel meeting on the 3rd Wednesday of every month:

find out more at


Develop your software in a team

I've done a review of a large Domino system today. There was a team of developers quarrelling over how to do it and who has to do it. The result reminded me that TEAM is an acronym in German and stands for "Toll, Ein Anderer Machts" (meaning: great, somebody else is doing it). I was so impressed, I could only recommend this product from this great company. (I admit, I HATE motivational posters.)


Talk on “How Prototyping Helps to Develop Better Quality Software”

Back on stage... On 12 Nov. I will present to the Singapore Computer Society on “How Prototyping Helps to Develop Better Quality Software”. If you happen to be in Singapore, join us. Details can be found on the SCS Website


Ban The Reports!

I do a lot of requirement analysis and software specifications. It is exciting work because you never know what you will get in the end. Researching what is needed includes contextual enquiries, workplace artifacts, paper prototypes, focus groups, competitor analysis etc. Since no system is an island any more, users have certain expectations. One of them I find very funny/irritating/stupid/brainwashed/unreflected/conditioned* (*make your pick): "We want reports". I usually tell users of the new system: "Reports are forbidden!" After the panic attack settles I explain why. Don't get me wrong: software that creates reports is not forbidden. I even made reports part of my standard solution building blocks (I like Crystal Reports, its siblings, Intelliprint (for Notes), PDFPump and ShowBusiness Cuber). However I stand firm: Ban the Reports!


Outsourcing IT jobs.

There are big concerns about IT jobs leaving the US and heading to other countries. Luckily the worries can be buried: we in Asia can't beat the prices this US company can offer, and they don't even have humane working conditions.


This site is in no way affiliated, endorsed, sanctioned, supported, nor enlightened by Lotus Software nor IBM Corporation. I may be an employee, but the opinions, theories, facts, etc. presented here are my own and are in no way given in any official capacity. In short, these are my words and this is my site, not IBM's - and don't even begin to think otherwise. (Disclaimer shamelessly plugged from Rocky Oliver)
© 2003 - 2017 Stephan H. Wissel - some rights reserved as listed here: Creative Commons License
Unless otherwise labeled by its originating author, the content found on this site is made available under the terms of an Attribution/NonCommercial/ShareAlike Creative Commons License, with the exception that no rights are granted -- since they are not mine to grant -- in any logo, graphic design, trademarks or trade names of any type. Code samples and code downloads on this site are, unless otherwise labeled, made available under an Apache 2.0 license. Other license models are available on written request and written confirmation.