
21/01/2016

The quick and dirty Domino Cloudant export

Moving data out of Domino has never been hard, given all the APIs available. The challenge has always been: move it where? Ignoring for a second all security considerations, the challenge is to find a target structure that matches the Domino model. Neither flat table storage nor an RDBMS fits that very well.
A close contender is MongoDB, which is used in one compelling Notes retirement offering. However, the closest match in concept and structure is Apache CouchDB, not surprisingly given its heritage and origin.
It is maintained by a team led by the highly skilled Jan Lehnardt, and of course there are differences to Notes.
But the fit is good enough. Using the lightweight Java library Ektorp, exporting a set of documents from Notes to CouchDB is a breeze. The core class is a simple mapping of a Notes document to a JSON structure:
package com.notessensei.export;

import java.util.HashMap;
import java.util.Map;
import java.util.Vector;

import lotus.domino.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

public class NotesJsonDoc {
	public static final String ID_FIELD = "_id";
	public static final String REV_FIELD = "_rev";

	private Map<String, String> content = new HashMap<String, String>();

	/**
	 * Captures all items of a Notes document as String values and
	 * reuses the UNID as the CouchDB document id.
	 */
	@SuppressWarnings("rawtypes")
	public NotesJsonDoc(Document source) throws NotesException {
		Vector allItems = source.getItems();
		for (Object itemObject : allItems) {
			Item item = (Item) itemObject;
			this.content.put(item.getName(), item.getText());
		}
		this.content.put(ID_FIELD, source.getUniversalID());
	}

	public Map<String, String> getContent() {
		return this.content;
	}

	// CouchDB needs the current revision for updates
	public NotesJsonDoc setRevision(String revision) {
		this.content.put(REV_FIELD, revision);
		return this;
	}

	public String getId() {
		return this.content.get(ID_FIELD);
	}
}
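
Pushing the mapped document to CouchDB is then only a few lines with Ektorp. Here is a minimal sketch, assuming a local CouchDB; the URL, database name and the helper method are placeholders I added for illustration:

import org.ektorp.CouchDbConnector;
import org.ektorp.CouchDbInstance;
import org.ektorp.http.HttpClient;
import org.ektorp.http.StdHttpClient;
import org.ektorp.impl.StdCouchDbConnector;
import org.ektorp.impl.StdCouchDbInstance;

public class QuickExport {
	public static void exportOne(lotus.domino.Document notesDoc) throws Exception {
		// Connect to the CouchDB/Cloudant server (placeholder URL)
		HttpClient httpClient = new StdHttpClient.Builder().url("http://localhost:5984").build();
		CouchDbInstance dbInstance = new StdCouchDbInstance(httpClient);
		CouchDbConnector db = new StdCouchDbConnector("notesexport", dbInstance);
		db.createDatabaseIfNotExists();
		// Ektorp serializes the Map to JSON; the UNID doubles as the document id
		NotesJsonDoc doc = new NotesJsonDoc(notesDoc);
		db.create(doc.getId(), doc.getContent());
	}
}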

08/01/2016

Mess with the Bluemix Colors

The Bluemix designers consider their color scheme robust, decent and unobtrusive. However, not everybody likes the dark colors (some do). Stylish to the rescue. It comes in Firefox and Chrome flavours. It requires a custom style sheet, and it might take you a while to figure things out. So use this for starters:
 .dashboardArtifactCreationSection .tile,
 .cloudOESpaceLabel,
 .tile-container .tile,
 .bluemix-global-header,
 .bluemix-global-header .bluemix-nav-list,
 .NavTree2,
 .cloudOEActionBarDockedNavArea,
 .cloudOEActionBarSelector,
 .cloudOEAppDetails .cloudOEActionBarDockedNavArea,
 .inner,
 .cloudOEAppDetails .cloudOEActionBarNavigationTreeNode,
 .d-category-section .category-header,
 .dijitInputContainer,
 .cloudOEActionBarContentArea .cloudOEFilterBar .cloudOESearchBox .dijitInputField,
 .cloudOEActionBarContentArea .cloudOEFilterBar .cloudOESearchBox input,
 .cloudOEActionBarContentArea .cloudOEFilterBar .cloudOESearchBox .dijitTextBox,
 .d-docked-nav-area .d-nav-container,
 .nav-category,
 .cloudOEAppActivity,
 .appDetailsOverview_Health .tile-segmented,
 .appDetailsOverview_EstimateCost {
    background-color: #3366ff;
    color: #fff;
}
 body {
    background-color: white;
}

 header {
    background-color: #6677CC;
}

 .cloudOEStoreFront,
 .catalog-message-pane,
 .catalog-container {
    background-color: #65C4FF;
    color: black;
}

 .cloudOEDockedOpenNav {
    background-color: #4dc4ff;
    color: black;
}
Now go and pick nice colors.
As usual YMMV

24/12/2015

Mail archive with Apache CouchDB / IBM Cloudant - Part 1

Like it or not, your eMail has turned into the archive of your (working) past. One of the challenges with this archive is the tendency to switch eMail systems from time to time. IBM Notes won't open your Outlook PST file, nor would Outlook open your Notes NSF database.
So a vendor- and format-neutral solution is required. The obvious choice here is MIME: for one, it is the format any message crossing the internet is encoded in; secondly, all eMail applications support MIME - to some extent. Just storing each message into a directory structure isn't a good solution either, since navigation and search leave much to be desired, so some more work is needed.
Of course open standards tend to be ambiguous enough to allow different interpretations or the implementation of proprietary extensions. MIME is no exception. You can send any type of attachment, including malicious payloads; they are simply encoded content whose format sits outside the MIME standard itself.
So looking at an archival solution here is my list of requirements:
  • Needs to be able to store MIME messages
  • MIME headers and other ID fields need to be captured in database fields
  • Needs to be able to sync to different locations for backup/availability
  • Needs to provide navigation access via sorted, filtered lists
  • Interface to do some analytics
  • Full text search
  • HTML and text content should be displayed directly, all other types should be listed as attachments
  • Inline images (href/src in the HTML content pointing to other MIME parts) need to be dealt with
  • Import capabilities
  • Source code available for inspection, OpenSource if possible
Looking at the requirements, I concluded: I have a clear idea of what I want, but I haven't found it. The logical next step: let's build it.
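To make this concrete, a single message in CouchDB could look roughly like the sketch below; the field names and the use of inline attachments are my assumptions, not a finished design:

{
  "_id": "message-unique-id",
  "subject": "Quarterly numbers",
  "from": "alice@example.com",
  "to": ["bob@example.com"],
  "date": "2015-12-24T08:30:00Z",
  "headers": { "Message-ID": "<abc@example.com>", "MIME-Version": "1.0" },
  "body": "<html>...the HTML part rendered directly...</html>",
  "_attachments": {
    "report.pdf": { "content_type": "application/pdf", "data": "...base64..." }
  }
}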

26/11/2015

Automated Tests in Bluemix Build and Deploy

Bluemix is a beautiful environment for agile software development. Its build and deploy capability ensures continuous delivery, so you can focus on code. A well-run project requires automated testing (from unit tests all the way up to integration tests).
You can configure this in the Build and Deploy pipeline, so your project looks like this:
A fully working pipeline
While you could argue: "I run my tests locally", you might encounter the situation where you use the online editor, and then you depend on the tests running in Bluemix. Setting up build and deploy is well documented and straightforward.

Tip: For Java projects you should use a Maven or Gradle build, so your library dependencies are properly resolved. For Node.js projects a "simple" build would suffice; however, using npm install makes the stage fail early if your package.json has an issue, so you don't run later stages only to fail there.

However, the documentation for the test stage simply states: "If you want to require that certain conditions are met, include test jobs before or after your build and deploy jobs. Test jobs are highly customizable. For example, you might run tests on your project code and a deployed instance of your app." That's a little "thin".
Inspecting the test job screen itself, we can see that there are different testing options
Choices for testing
I specifically like the Sauce Labs integration and the ability to run code, security and vulnerability scans (so a real pipeline might have up to 4 distinct test stages). However, the screen for "simple" tests, where unit tests go, isn't particularly helpful:
The helpful test screen
So let's shed some light on the inner workings.

25/08/2015

There's a plug-in for that! Getting started with Cloud Foundry plug-ins

Since IBM Bluemix is built on top of Cloud Foundry, all the know-how and tooling available for the latter can be used in Bluemix too.
One of the tools I'm fond of is the Cloud Foundry command line cf. The tool is a thin veneer over the Cloud Foundry REST API and is written in Go. "Thin veneer" is a slight understatement, since the cf command line is powerful, convenient and - icing on the cake - extensible.
A list of current plug-ins can be found in the CF Plug-in directory. The installation instructions haven't kept up with the cf releases, so here are the steps you need:
  1. Head to the CF Command Line release page and make sure you have the latest release installed.
    At the time of this writing that would be 6.12.2
  2. Add the community repository to your installation using this command:
    cf add-plugin-repo CF-Community http://plugins.cloudfoundry.org/
    This command is only available in cf versions > 6.10. A lot of blog entries and even the GitHub documentation suggest downloads or even Go installs. With the availability of the repository these steps are no longer necessary (but you are free to use them)
  3. Now listing all available plug-ins is as simple as cf repo-plugins
  4. To install a specific plug-in you issue the command:
    cf install-plugin '[name of plugin as listed]' -r CF-Community
    If the name doesn't contain white space, the quotes can be omitted
  5. After installation cf help provides the short instructions on how to use the modules
The interesting question now is: which plug-ins are worth looking at?

24/08/2015

Let there be a light - Angular, nodeRED and Websockets

NodeRED has conquered a place in my permanent toolbox. I run an instance in Bluemix, on my local machine and on a Raspberry Pi. I built a little demo where a light connected to a Particle lights up based on an event reaching a NodeRED instance. However, I don't carry my IoT gear with me all the time (I got lots of funny looks at airport security for it), but I still want to demo the app. The NodeRED side is easy: I just added a websocket output node and the server side was ready to roll.
Web socket in NodeRED
On the browser side I decided to use angular.js and one of its web socket libraries, ng-websocket. The application code is just about 50 lines, so here it goes:
'use strict';

var websocketEndpoint = 'wss://'+window.location.hostname+'/ws/bulb';

console.log('Application loading ...');
// Declare app level module which depends on views, and components
var myApp = angular.module('myApp', ['ngWebsocket','ngRoute']);


myApp.config(['$routeProvider', function($routeProvider) {
    console.log('Routes loading... ');
    $routeProvider.when('/bulbon', {
        templateUrl: 'bulbs/bulb-on.html'
    }).when('/bulboff', {
        templateUrl: 'bulbs/bulb-off.html'
    }).when('/bulbunknown', {
        templateUrl: 'bulbs/bulb-unknown.html'
    }).otherwise({redirectTo: '/bulbunknown'});
}]);

myApp.run(function ($websocket, $location) {
    console.log('run');
    var ws = $websocket.$new({
        url: websocketEndpoint,
        reconnect: true
    }); // instance of ngWebsocket, handled by $websocket service

    ws.$on('$open', function () {
        console.log('Websocket connection open');
    });

    ws.$on('$message', function (data) {
        console.log('data arrived');
        console.log(data);
        var newlocation = '#/bulbunknown';
        if (data.bulb === 1) {
            newlocation = '#/bulbon';
        } else if (data.bulb === 0) {
            newlocation = '#/bulboff';
        }

        window.location = newlocation;
    });

    ws.$on('$close', function () {
        console.log('Websocket connection closed');
    });
});

console.log('Done');

The HTML is simple. I split it into the main file and 3 status files. One could easily put the statuses into a script template section or inside the app.
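
For illustration, the main file could look like this minimal sketch; the markup and file names are my assumptions based on the routes above:

<!DOCTYPE html>
<html ng-app="myApp">
  <head>
    <title>Bulb demo</title>
  </head>
  <body>
    <!-- ngRoute swaps the bulb status templates in here -->
    <div ng-view></div>
    <script src="angular.js"></script>
    <script src="angular-route.js"></script>
    <script src="ng-websocket.js"></script>
    <script src="app.js"></script>
  </body>
</html>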

24/08/2015

Identity in the age of cloud

Who is using your application, and what they can do, has evolved over the years. With cloud computing and the distribution of identity sources, the next iteration of this evolution is pending. A little look into the history:
  • In the beginning was the user table (or flat file) with a column for user name and (initially unencrypted) password. The world was good, of course only until you had enough applications and users complained bitterly
  • In the next iteration, access to the network or intranet was moved to a central directory. A lot of authentication became: if (s)he can get to the server, we pick up the user name
  • Other applications started to use that central directory to look up users, either via LDAP or proprietary protocols. So while the password was the same, users entered it repeatedly
  • Along came a flood of single sign-on frameworks that mediated identity information between applications, based on a central directory. Some used an adapter approach (like Tivoli Access Manager), others a standardised protocol like LTPA or SPNEGO
All of these have a few concepts in common:
  • All solutions are based on a carefully guarded and maintained central directory (or more than one). These directories are under full corporate control and usually defended against any attempt to alter or extend their functionality. (Some of them can't roll back schema changes and are built on fragile storage)
  • Besides user identity they provide groups and sometimes roles. Often these roles are limited to directory capabilities (e.g. admin, printer-user), and applications are left to their own devices to map users/groups to roles (e.g. the ACL in IBM Notes)
  • The core directory is limited to company employees and is often synchronised with an HR directory. Strategies for the inclusion of customers, partners and suppliers tend to be point solutions
In a cloud world this needs to be re-evaluated. The first stumbling block is multiple directories that are no longer under corporate control. Customers have come to expect functionality like "Login with Facebook", and security experts are fond of two-factor authentication - something the LDAP protocol has no provision for. So a modern picture looks more like this:
Who and What come from different sources

Core tenets of Identity

  • An Identity Provider (IdP) is responsible for establishing identity. The current standard for that is SAML, but that's not the only way. The permission delegation known as OAuth can, to the confusion of many architects, be used as an authentication substitute (I allow you to read my name and eMail from my LinkedIn profile via OAuth, and you then take those values as my identity). In any case, the cloud applications shouldn't care; they just ask the respective service "Who is this person?". Since the sources can be very different, that's all the SSO will (and shall) provide
  • The next level is basic access control. I would place that responsibility on the router, but depending on cloud capability each application needs to check itself. Depending on the security need, an application might show a home page with information on how to request access, or deflect to an error (eventually hiding behind a 404). In a micro service world an application access service could provide this information. A web UI could also use that service to render the list of available applications for the current user
  • It is getting more interesting now. In a classical application, any action (including reading and rendering data) is linked to a user role or permission. These permissions live in code. In a cloud micro service architecture the right to an action is better delegated to a rule service. This allows for more flexibility and user control. To obtain a permission, a context object is handed to the rule service (XML or JSON) that at its bare minimum contains just the user identity. In more complex cases it could include value, project name, decision time etc. The rule engine then approves or denies the action (see the sketch after this list)
  • A secondary rule service can go and look up process information (e.g. who is the approver for a 5M loan to ACME Inc). Extracting the rules from code into a rule engine makes them more flexible and accessible to the business (you want to be good at caching)
  • The rule engine or other services use a profile service to look up user properties. This is where group memberships and roles live.
    One could succumb to the temptation to let that service depend on an LDAP corporate directory, only to realise that the guardians will lock it down (again). The better approach: the profile service synchronises properties that are maintained in other systems and presents the union of them to requesting applications
  • So a key difference between the IdP and the profile service: the IdP has one task and one only, and performs it through lookups. The profile service provides profile information (in different combinations) that it obtained through direct entry or synchronisation
  • The diagram is a high-level one; the rule engine and profile service might themselves be composed of multiple micro services. That is beyond this article
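As promised above, a minimal sketch of such a context object in JSON; everything beyond the user identity is an illustrative assumption:

{
  "identity": "alice@example.com",
  "action": "approve",
  "resource": "loan-application-4711",
  "context": {
    "value": 5000000,
    "currency": "USD",
    "project": "ACME Inc",
    "requestedAt": "2015-08-24T10:15:00Z"
  }
}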
Another way to look at it is the distinction between Who & What
The who and what of identity
As usual YMMV

12/07/2015

Validating JSON objects

One of the nice tools for rapid application development in Bluemix is Node-RED, which escaped from IBM Research. One passes a msg JSON object between nodes that process (mostly) the msg.payload property. A feature I like a lot is the ability to use an http input node that can listen to a POST on a URL and automatically translates the posted form into a JSON object.
The conversion runs non-discriminatory, so any field that is added to the form will end up in the JSON object.
In a real-world application that's not a good idea; an object shouldn't have unexpected properties. I had asked about this before, so it wasn't too hard to derive a function I could use in Node-RED:
Cleaning up an incoming object - properties
this.deepclean = function(template, candidate, hasBeenCleaned) {
    var cleandit = false;

    for (var prop in candidate) {
        if (template.hasOwnProperty(prop)) {
            // The property is allowed - check strict cleaning and recursion
            var tProp = template[prop];
            var cProp = candidate[prop];

            // Case 1: strict checking and the types are different
            if (this.strictclean && ((typeof tProp) !== (typeof cProp))) {
                delete candidate[prop];
                cleandit = true;

            // Case 2: both are objects - recursion needed
            } else if (((typeof tProp) === "object") && ((typeof cProp) === "object")) {
                cleandit = this.deepclean(tProp, cProp, (hasBeenCleaned || cleandit));
                candidate[prop] = cProp;
            }

        // Case 3: the property is not in the template - remove it
        } else {
            delete candidate[prop];
            cleandit = true;
        }
    }

    return (hasBeenCleaned || cleandit);
};
The function is called with the template object, the incoming object and false as the initial parameter. While the function could easily be used inside a function node, the better option is to wrap it into a node of its own, so it is easy to use anywhere. The details of how to do that can be found on the Node-RED website. The easiest way to try the function: add your Node-RED project to version control, download the object cleaner node and unzip it into the nodes directory. It works in Bluemix and in a local Node-RED installation.
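A minimal usage sketch, assuming deepclean and the strictclean flag have been defined on the same object (e.g. at the top of the same function node); the template and the warning are illustrative only:

// Template listing the only properties (and types) we accept
var template = { name: "", email: "", subscribe: false };
this.strictclean = true; // also drop properties whose type differs from the template

// Cleans msg.payload in place; returns true if anything was removed
var wasCleaned = this.deepclean(template, msg.payload, false);
if (wasCleaned) {
    node.warn("Unexpected properties were removed from the payload");
}
return msg;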

29/06/2015

Random insights in Bluemix development (a.k.a Die Leiden des Jungen W)

Each platform comes with its own little challenges - things that work differently than you expect. Those little things can easily steal a few hours. This post collects some of my random insights:
  • Development cycle

    I'm a big fan of offline development. My preferred way is to use a local git repository and push my code to Bluemix DevOps service to handle compilation and deployment. It comes with a few caveats
    • When you do anything beyond basic Java, you want to use Apache Maven. The dependency management is worth the learning curve. If you started with the Java boilerplate, you end up with an ANT project. Take some time to not only mavenize it, but also adjust the directories to follow the Maven standards. This involves shuffling a few files around (/src vs. /src/main/java and /bin vs. /target/classes for starters) and editing the pom.xml to remove the custom paths
    • Make sure you clear out the path in the build job on DevOps; Maven already deploys to target. If you have specified target in DevOps, you end up with the code in target/target and the deploy task won't find anything
    • Learn about the Liberty profile and its available features, so you can properly specify <scope>provided</scope> in the pom.xml
    • In node.js, when you manually install a module in node_modules that isn't pulled from a repository through an entry in package.json, that module will not be visible to the standard build and deploy, since (surprise, surprise) node_modules is excluded from version control and build checkout.
      Now there are a bunch of workarounds described, but I'll sum it up: don't bother. Either you move your module into a repository DevOps can reach, or you build the application locally and use cf push
    • manifest.yml is your friend. Learn about it, especially the path entry. When deploying a Maven build your path will be /target/[name-of-app]-[maven-version].war (see the manifest sketch after this list)
    • You can specify a buildpack and environment parameters in a manifest. Works like a charm. However, removing them from the manifest has no effect: you have to manually unset the values using the cf tool. The buildpack also needs to be reset manually, so be careful there!
  • Services

    The automagical configuration of services is one of the things to love in Bluemix. This holds especially true for Java
    • The samples suggest using the VCAP_SERVICES environment variable to get credentials and URLs for your services. In short: don't. The Java Liberty buildpack does a nice job making the values available through JNDI or Spring, so simply use those. To make sure java:comp/env can see them properly, don't forget to reference them in web.xml (see the sketch after this list)
    • As a diversion from this: I found the MQ Light Java classes less stressful than configuring JMS via JNDI. The developers did a good job making that library, too, work automagically on Bluemix.
    • For some services (e.g. the JAX-RS 2.0 client or Bluemix SSO) you do have to touch the server.xml.
      The two methods are a packaged server or a server directory. The former requires a locally installed Liberty profile, so I prefer the latter. It is actually easier than it sounds. In your (Maven) project, you create the new directories defaultServer and defaultServer/apps (case sensitive!). You create/edit the server.xml in the defaultServer directory. Then check for the maven-war-plugin in your pom.xml and change the output directory:
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <failOnMissingWebXml>false</failOnMissingWebXml>
          <warName>${project.artifactId}</warName>
          <outputDirectory>${basedir}/defaultServer/apps</outputDirectory>
        </configuration>
      </plugin>
      

      Then you can deploy your application using mvn install and cf push [appname] -p defaultServer. These two commands work in DevOps too!
    • The SSO service is "Single Sign-On"; there is no real "Single Sign-Out". That's not an issue specific to Bluemix, but something all SSO solutions struggle with - just to be clear on what to expect. The login dialog is ugly, but fully customizable. The nature of SSO (corporate and/or a public provider) makes it a minimal provider: identity only, no roles, attributes or groups. In the spirit of micro services: build a REST-based service for that
  • Node-RED

    While it is advertised as an IoT tool, there is much more to this little gem
    • Node-RED runs on Bluemix, your local PC or even a Raspberry Pi. For the latter, head over to The Thingbox to get a ready-made OS image
    • Node-RED can be easily expanded; there are tons of ready-made modules at Node-RED flows. Not all are suitable for Bluemix (e.g. the ones talking Bluetooth), but a local Node-RED can easily talk to a Bluemix Node-RED, making it easy for applications to run distributed
    • My little favourite: connect an HTTP POST input directly to a Cloudant output. Node-RED converts the encoded form into a JSON object you can drop into the database as is. You might want to add a small filter (a function node) to avoid data contamination
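Two of the bullets above reference sketches. First, the web.xml entry that makes a bound service visible under java:comp/env could look roughly like this; the JNDI name and the Ektorp type are my assumptions, so adjust them to the service you actually bound:

<resource-ref>
    <res-ref-name>couchdb/mydb</res-ref-name>
    <res-type>org.ektorp.CouchDbInstance</res-type>
    <res-auth>Container</res-auth>
</resource-ref>

Second, a minimal sketch of a manifest.yml for a Maven-built WAR; all names, sizes and variables are placeholders:

applications:
- name: myapp
  memory: 512M
  path: target/myapp-1.0.0.war
  buildpack: liberty-for-java
  env:
    MY_SETTING: some-value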
As usual YMMV

18/06/2015

Investigating JNDI

When developing Java, locally or for Bluemix, a best practice is to use JNDI to access the resources and services you use. In Cloud Foundry all services are listed in the VCAP_SERVICES environment variable and could be parsed as a JSON string. However, this would make the application platform dependent, which is something you want to avoid.
Typically a JNDI service requires editing the server.xml to point to the right service. However, editing the server.xml in Bluemix is something you want to avoid as much as possible. Luckily the WebSphere Java Liberty buildpack, which is the one Bluemix uses for Java by default, handles that for you automagically, and all Bluemix services turn into discoverable JNDI objects. So far the theory. I found myself in the tricky situation of needing to check what services are actually there. So I wrote some code that turns the available JNDI objects into a JSON string.
    // Note: the two methods below live inside a JAX-RS resource class; they need imports from
    // javax.naming (InitialContext, NamingEnumeration, NameClassPair) and javax.ws.rs / javax.ws.rs.core
    @GET
    @Path("/jndi")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getJndi() {
        StringBuilder b = new StringBuilder();
        b.append("{ \"java:comp\" : [");
        this.renderJndi("java:comp", b);
        b.append("]}");

        return Response.status(Status.OK).entity(b.toString()).build();
    }

    private void renderJndi(String prefix, StringBuilder b) {
        boolean isFirst = true;

        try {
            InitialContext ic = new InitialContext();
            NamingEnumeration<NameClassPair> list = ic.list(prefix);
            while (list.hasMore()) {
                if (!isFirst) {
                    b.append(", \n");
                }

                NameClassPair ncp = list.next();
                String theName = ncp.getName();
                String className = ncp.getClassName();

                b.append("{\"name\" : \"");
                b.append(theName);
                b.append("\",");
                b.append("\"javaClass\" : \"");

                b.append(className);
                b.append("\"");
                if ("javax.naming.Context".equals(className)) {
                    b.append(", \"children\" : [");
                    this.renderJndi(prefix + (prefix.endsWith(":") ? "" : "/") + theName, b);
                    b.append("]");
                }
                b.append("}");
                isFirst = false;
            }
        } catch (Exception e) {
            e.printStackTrace();
            b.append("\"");
            b.append(e.getMessage());
            b.append("\"");
        }

    }
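
Once the listing reveals a name, the actual lookup is a one-liner. A minimal sketch, with couchdb/mydb as a placeholder JNDI name:

InitialContext ic = new InitialContext();
// Use whatever name the listing above revealed
Object service = ic.lookup("java:comp/env/couchdb/mydb");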

Enjoy - as usual, YMMV

02/06/2015

Adventures with Node-RED

Node-RED is a project that successfully escaped "ET" - not the alien, but IBM's Emerging Technology group. Built on top of node.js, Node-RED runs in many places, including the Raspberry Pi and IBM Bluemix.
In Node-RED the flow between nodes is graphically represented by lines you drag between them, requiring just a little scripting to get them going.
The interesting part is the nodes that are available (unless you fancy writing your own): a large array of ready-made flows with nodes and sample applications makes Node-RED extremely flexible (I wonder if it would make sense to build a workflow engine with it). In case you don't find a node you fancy, you can build your own. Not all nodes are created equal, so you need to check what works. When you run Node-RED on Bluemix, you won't get access to hardware like serial ports or Bluetooth, but you gain a DNS-addressable IP endpoint (and you are not limited to http(s)). Furthermore, IBM provides direct access to the IBM IoT cloud, which takes the headache out of device configuration by providing an extensive library of device libraries.
So how to get additional nodes, own or others, onto Bluemix? Here are the steps:
  1. create a new application with the IoT Boilerplate
  2. link that application to version control on hub.jazz.net
  3. clone the repository locally git clone ...
  4. edit the package.json and add the item you would like to add (see the sketch below)
  5. commit and push the changes back to jazzhub and let "build and deploy" sort it out
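For step 4, the relevant part of the package.json could look like this sketch; node-red-node-example is a hypothetical module name and the version ranges are illustrative:

{
  "name": "my-nodered-app",
  "version": "0.0.1",
  "dependencies": {
    "node-red": "0.x",
    "node-red-node-example": "0.1.x"
  }
}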

20/05/2015

Your API needs a plan (a.k.a. API Management)

You drank the API economy Kool-Aid and created some neat https-addressable calls using Restify or JAX-RS. Digging deeper into the concept of micro services, you realize that an https-callable endpoint alone doesn't make an API. There are a few more steps involved.
O'Reilly provides a nice summary in the book Building Microservices, so you might want to add that to your reading list. In a nutshell:
  • You need to document your APIs. The most popular tools here seem to be Swagger and WSDL 2.0 (I also like Apiary)
  • You need to manage who is calling your API. The established mechanism is to use API keys. Those need to be issued, managed and monitored
  • You need to manage when your API is called. Depending on the ability of your infrastructure (or your ability to pay for scale-out), you need to limit the rate at which your API is called, by second, hour or billing period
  • You need to manage how your API is called: in which sequence, is the call clean, where does it come from
  • You need to manage versions of your API, so innovations and improvements don't break existing code
  • You need to manage the grouping of your endpoints into "packages" like: free API, freemium API, partner API, pro API etc. Since the calls will overlap, building code for the bundles would lead to duplicates
And of course, all of this needs statistics and monitoring. Adding that to your code would create quite some overhead, so I would suggest: use a service for that.
In IBM Bluemix there is the API Management service. This service isn't a new invention, but the existing IBM Cloud API Management made available in a consumption-based pricing model.
Your first 5,000 calls are free, as is your first developer account. After that it is less than 6 USD (pricing as of May 2015) for 100,000 calls. This provides a low-investment way to evaluate the power of IBM API Management.
API Management IBM Style
The diagram shows the general structure. Your APIs only need to talk to the IBM cloud, removing the headache of security, packet monitoring etc.
Once you have built your API, you expose it back to Bluemix as a custom service. It will appear like any other service in your catalogue. The purpose of this is to make it simple to use those APIs from Bluemix - you just read your VCAP_SERVICES.
But you are not limited to using these APIs from Bluemix. You can call IBM API Management directly (your API partners/customers will like that) from whatever has access to the Intertubes.
There are excellent resources published to get you started. Now that you know why, check out the how. If you're not sure about that whole micro services thing, check out Chris' example code.
As usual YMMV

03/03/2015

Develop local, deploy (cloud) global - Java and CouchDB

Leaving the cosy world of Domino Designer behind, venturing into IBM Bluemix, Java and Cloudant, I'm challenged with a new set of tasks to master. Spoiled by Notes, where Ctrl+O gives you instant access to any application, regardless of whether it is stored locally or on a server, I struggled a little with my usual practice of

develop local, deploy (Bluemix) global

The task at hand is to develop a Java Liberty based application that uses CouchDB/Cloudant as its NoSQL data store. I want to be able to develop/test the application while being completely offline and deploy it to Bluemix. I don't want any code that contains offline/online conditions, but rather use the configuration of the runtimes for it.
Luckily I have access to really smart developers (thx Sai), so I succeeded.
This is what I found out I needed to do. The list serves as a reference for myself and others living in a latency/bandwidth challenged environment.
  1. Read: There are a number of articles around that contain bits and pieces of the information required. In no specific order:
  2. Install: This is a big jump forward. No more looking for older versions, but rather bleeding edge. Tools of the trade:
    • Git. When you are on Windows or Mac, try the nice GUI of SourceTree, and don't forget to learn git-flow (best explained here)
    • A current version of the Eclipse IDE (Luna at the time of writing; the Java edition suffices)
    • The Liberty profile beta. The beta is necessary, since it contains some of the features, notably couchdb, which are available in Bluemix by default. Use the option to drag the link onto your running Eclipse client
    • Maven - the Java way to resolve dependencies (guess where bower and npm got their ideas from)
    • CURL (that's my little command line ninja stuff, you can get away without it)
    • Apache CouchDB
  3. Configure: Java loves indirection, so there are a few moving parts as well (details below)
    • The Cloudant service in Bluemix
    • The JNDI name in the web.xml. Bluemix will discover the Cloudant service and create the matching entries in the server.xml automagically
    • A local server running the Liberty 9.0 profile
    • The configuration for the local CouchDB in the local server.xml (a sketch follows after this list)
    • Replication between your local CouchDB instance and the Cloudant server database (if you want to keep the data in sync)
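For the CouchDB configuration in the local server.xml, a sketch; it assumes the couchdb-1.0 beta feature with the Ektorp jars collected into a shared library, and the JNDI name, paths and credentials are placeholders:

<featureManager>
    <feature>couchdb-1.0</feature>
</featureManager>

<!-- Ektorp and its dependencies, e.g. copied from the local Maven repository -->
<library id="couchdbLib">
    <fileset dir="${server.config.dir}/lib" includes="*.jar"/>
</library>

<couchdb id="couchdb" jndiName="couchdb/mydb" libraryRef="couchdbLib"
         url="http://localhost:5984" username="admin" password="secret"/>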
The flow of the data access looks like this
Develop local, deploy global
