
24/08/2015

Identity in the age of cloud

Who is using your application and what they can do have evolved over the years. With cloud computing and the distribution of identity sources, the next iteration of this evolution is pending. A quick look at the history:
  • In the beginning was the user table (or flat file) with a column for user name and (initially unencrypted) password. The world was good, of course only until you had enough applications and users complained bitterly
  • In the next iteration access to the network or intranet was moved to a central directory. A lot of authentication became: if (s)he can get to the server, we pick up the user name
  • Other applications started to use that central directory to look up users, either via LDAP or proprietary protocols. So while the password was the same, users entered it repeatedly
  • Along came a flood of single sign-on frameworks that mediated identity information between applications, based on a central directory. Some used an adapter approach (like Tivoli Access Manager), others a standardised protocol like LTPA or SPNEGO
All of these have a few concepts in common:
  • All solutions are based on a carefully guarded and maintained central directory (or more than one). These directories are under full corporate control and usually defended against any attempt to alter or extend their functionality. (Some of them can't roll back schema changes and are built on fragile storage)
  • Besides user identity they provide groups and sometimes roles. Often these roles are limited to directory capabilities (e.g. admin, printer-user) and applications are left to their own devices to map users/groups to roles (e.g. the ACL in IBM Notes)
  • The core directory is limited to company employees and is often synchronised with an HR directory. Strategies for the inclusion of customers, partners and suppliers tend to be point solutions
In a cloud world this needs to be re-evaluated. The first stumbling block is the set of directories that are no longer under corporate control. Customers have come to expect functionality like "Login with Facebook", and security experts are fond of two-factor authentication - something the LDAP protocol has no provision for. So a modern picture looks more like this:
Who and What come from different sources

Core tenets of Identity

  • An Identity Provider (IdP) is responsible for establishing identity. The current standard for that is SAML, but that's not the only way. The permission delegation known as OAuth can, to the confusion of many architects, be used as an authentication substitute (I allow you to read my name and email from my LinkedIn profile via OAuth, and you then take those values as my identity). In any case, the cloud applications shouldn't care; they just ask the respective service "Who is this person?". Since the sources can be very different, that's all the SSO will (and shall) provide
  • The next level is basic access control. I would place that responsibility on the router, but depending on cloud capabilities each application might need to check for itself. Depending on the security needs, an application might show a home page with information on how to request access, or deflect to an error (eventually hiding behind a 404). In a micro service world an application access service could provide this information. A web UI could also use that service to render a list of available applications for the current user
  • It is getting more interesting now. In a classical application, any action (including reading and rendering data) is linked to a user role or permission. These permissions live in code. In a cloud micro service architecture the right to act is better delegated to a rule service. This allows for more flexibility and user control. To obtain a permission, a context object (XML or JSON) is handed to the rule service; in its bare minimum it contains just the user identity. In more complex cases it could include value, project name, decision time etc. The rule engine then approves or denies the action (see the sketch after this list)
  • A secondary rule service can go and look up process information (e.g. who is the approver for a 5M loan to ACME Inc). Extracting the rules from code into a rule engine makes them more flexible and accessible to the business (you want to be good at caching)
  • The rule engine or other services use a profile service to look up user properties. This is where group memberships and roles live.
    One could succumb to the temptation to let that service depend on the corporate LDAP directory, only to realise that its guardians will lock it down (again). The better approach is a profile service that synchronises properties maintained in other systems and presents the union of them to requesting applications
  • So a key difference between the IdP and the profile service: the IdP has one task and one task only, which it performs through lookups. The profile service provides profile information (in different combinations) that it obtained through direct entry or synchronisation
  • The diagram is a high-level one; the rule engine and the profile service might themselves be composed of multiple micro services. That is beyond the scope of this article
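To make the rule service idea more concrete, here is a minimal TypeScript sketch. The endpoint URL, field names and payload shape are hypothetical illustrations, not a prescribed API:

    // Minimal context object handed to a (hypothetical) rule service.
    // In its bare minimum it carries just the user identity; richer cases
    // add business attributes like amount, project or decision time.
    interface RuleContext {
      userId: string;       // identity as established by the IdP
      action: string;       // what the caller wants to do
      amount?: number;      // optional business attributes
      project?: string;
      requestedAt?: string; // ISO timestamp
    }

    interface RuleDecision {
      allowed: boolean;
      reason?: string;
    }

    // Ask the rule service whether the action is permitted.
    async function isAllowed(ctx: RuleContext): Promise<RuleDecision> {
      const response = await fetch("https://rules.example.com/v1/decide", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(ctx),
      });
      if (!response.ok) {
        // Fail closed: no decision means no permission.
        return { allowed: false, reason: `rule service returned ${response.status}` };
      }
      return (await response.json()) as RuleDecision;
    }

    // Usage: isAllowed({ userId: "jane.doe", action: "approveLoan",
    //   amount: 5_000_000, project: "ACME Inc" })
    //     .then(d => console.log(d.allowed ? "approved" : `denied: ${d.reason}`));

The application code stays free of hard-coded permissions; changing who may approve what becomes a rule change, not a deployment.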
Another way to look at it is the distinction between Who & What
The who and what of identity
As usual YMMV

12/04/2015

Cloud with a chance of TAR balls (or: what is your exit strategy)

Cloud computing is here to stay, since it does have many benefits. However, even unions made "until death do us part" come with prenups these days. So it is prudent for your cloud strategy to contemplate an exit strategy.
Such a strategy depends on the flavour of cloud you have chosen (IaaS, PaaS, SaaS, BaaS) and might require you to adjust the way you on-board in the first place. Let me shed some light on the options:

IaaS

When renting virtual machines from a book seller, a complete box from a classic hosting provider, or a mix of bare metal and virtual boxes from IBM, the machine part is easy: can you copy the VM image over the network (SSH, HTTPS, SFTP) to a new location? When you have a bare metal box, that won't work (there isn't a VM after all), so you need a classic "move everything inside" strategy.
If you drank the Docker Kool-Aid, the task might just be broken down into manageable chunks, thanks to the containers. Be aware: Docker welds you to a choice of host operating systems (and Windows isn't currently on the host list).
There are secondary considerations: how easy is it to switch the value-added services (DNS, CDN, management console etc.) on/off or to another vendor?

PaaS

Here you need to look separately at the runtime and the services you use. Runtimes like Java, JavaScript, Python or PHP tend to be offered by almost all vendors; .NET and C#, not so much. When your cloud platform vendor has embraced an open standard, it is most likely that you can deploy your application code elsewhere too, including back into your own data center or onto a bunch of rented IaaS devices.
It gets a little more complicated when you look at the services.
First look at persistence: is your data stored in a vendor-proprietary database? If yes, you probably can export it, but you will need to switch to a different database when switching cloud vendors. This means you need to alter your code and retest (but you do that with CI anyway?). So before you jump onto DocumentDB or DynamoDB (which run in a single vendor's PaaS only), you might want to check out MongoDB, CouchDB (and its commercial siblings Cloudant or Couchbase), Redis or OrientDB, which run in multiple vendor environments.
The same applies to SQL databases and blob stores. This is not a recommendation for a specific technology (SQL vs. NoSQL or Vendor A vs. Vendor B), but an aspect you must consider in your cloud strategy.
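One way to keep that effort contained is to hide persistence behind an interface of your own, so a vendor switch means writing one new adapter instead of touching the whole code base. A minimal TypeScript sketch (the interface and class names are made up for illustration):

    // The application only ever talks to this interface.
    interface CustomerStore {
      save(customer: { id: string; name: string }): Promise<void>;
      findById(id: string): Promise<{ id: string; name: string } | null>;
    }

    // One adapter per persistence choice. An in-memory version doubles as a
    // test fixture; a MongoCustomerStore, CloudantCustomerStore etc. would
    // implement the same interface against the respective driver.
    class InMemoryCustomerStore implements CustomerStore {
      private data = new Map<string, { id: string; name: string }>();

      async save(customer: { id: string; name: string }): Promise<void> {
        this.data.set(customer.id, customer);
      }

      async findById(id: string): Promise<{ id: string; name: string } | null> {
        return this.data.get(id) ?? null;
      }
    }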
The next checkpoint is the services you use. Here you have to distinguish between common services that are offered by multiple cloud vendors (DNS, auto scaling, messaging such as MQ and eMail) and services specific to one vendor (like IBM's Watson).
Taking the stand "If a service isn't offered by multiple vendors, we won't use it" can help you avoid lock-in, but it will also ensure that you stifle your innovation. After all, you use a service not for the sake of the service, but to solve a business problem and to innovate.
The more sensible approach is to check whether you can limit your exposure to that special service alone, should you decide to move on. This gives you the breathing space to then look for alternatives. Adding a market watch to see how alternatives evolve improves your hedging.
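The same thinking applies to vendor-specific services: wrap them in a thin facade you own, so only one adapter knows which vendor sits behind it. A hypothetical TypeScript sketch (the sentiment API, its URL and response shape are invented placeholders):

    // Your application depends on this facade, never on a vendor SDK.
    interface SentimentService {
      score(text: string): Promise<number>; // -1 (negative) .. +1 (positive)
    }

    // Adapter for a hypothetical vendor REST API. Should you move on,
    // you replace this one class and keep the rest of the code untouched.
    class VendorSentimentService implements SentimentService {
      constructor(private apiKey: string) {}

      async score(text: string): Promise<number> {
        const res = await fetch("https://api.vendor.example.com/sentiment", {
          method: "POST",
          headers: { "Content-Type": "application/json", "X-Api-Key": this.apiKey },
          body: JSON.stringify({ text }),
        });
        const body = (await res.json()) as { score: number };
        return body.score;
      }
    }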
Services are the "Damned if you do, damned if you don't" area of PaaS. All vendors scramble to provide top performance and availability for the common platform and distinction in the services on top of that.
After all, one big plus of the PaaS environment is the services that enable "composable businesses" - and save you the headache of coding them yourself. IMHO the best risk mitigation, and incidentally the state of the art, is sound API management, a.k.a. microservices.
Once you are there, you will learn that a classic monolithic architecture isn't cloud native (those architectures survive inside virtual machines) - but that's a story for another time.

SaaS

Here you deal with applications like IBM Connections Cloud S1, Google Apps for Work, Microsoft Office 365, Salesforce, SAP SaaS, but also Slack, Basecamp, GitHub and gazillions more.
Some of them (e.g. eMail or documents) have open-standard or industry-dominating formats. Here you need to make sure you get the data out in that format. I like the way Google is approaching this task: they offer Google Takeout, which tries to stick to standard formats and offers all data, any time, for export.
Others have at least machine-readable formats like CSV, JSON or XML. The nice challenge: getting data out is only half the task. Is your new destination capable of taking it back in?

BaaS

With business process as a service (BaaS) the same considerations as in the SaaS environment come into play: can I export data in a machine-readable, preferably industry-standard format? E.g. you used a payroll service and want to bring it back in-house or move to a different service provider. You need to make sure your master data can be exported and that you have the reports for historical records. When covered in reports, you might get away without transactional data. Typical formats are CSV, JSON and XML.

As you can see, it is not rocket science, but there is a lot to consider. For all options the same questions apply: do you have what it takes to move? Is there enough bandwidth (physical and mental) to pull it off? So don't get carried away with the wedding preparations and check your prenuptials.

17/04/2014

You want to move to Domino? You need a plan!

Cloud services are all en vogue, the hot kid on the block and irresistible. So you decided to move there, and your luggage has to come along. Suddenly you realize that flipping a switch won't do the trick. Now you need to listen to the experts.
The good folks at Amazon have compiled a table that gives you some idea how long it would take to transfer data:
Available Internet Connection   Theoretical Min. Number of Days to Transfer 1 TB at 80% Network Utilization
T1 (1.544 Mbps)                 82 days
10 Mbps                         13 days
T3 (44.736 Mbps)                3 days
100 Mbps                        1 to 2 days
1000 Mbps                       Less than 1 day
Some stuff for your math
(Reproduced without asking)
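A rough TypeScript sketch of the arithmetic behind those numbers (assuming decimal terabytes and 80% sustained utilization; the helper is mine, not Amazon's):

    // Days needed to push a given volume through a given pipe.
    function transferDays(terabytes: number, mbps: number, utilization = 0.8): number {
      const bits = terabytes * 8e12;                  // 1 TB ~ 8 * 10^12 bits
      const bitsPerSecond = mbps * 1e6 * utilization;
      return bits / bitsPerSecond / 86_400;           // 86,400 seconds per day
    }

    // transferDays(1, 100)   -> roughly 1.2 days (in line with the table)
    // transferDays(400, 100) -> roughly 463 days, i.e. well over a year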
Talking to customers gung-ho to move, I came across data volumes of 10-400 TB. Now go and check your pipe and do the math. A big-bang, just-flip-the-switch migration is out of the picture. You need a plan. Here is a cheat sheet to get you started:
  1. Create a database that contains all information about your existing users and how they will be once provisioned on Domino (if you are certified for IBM SmartCloud migrations, IBM has one for you)
  2. Gather intelligence on data size and connection speed. Design your daily batch size accordingly
  3. Send a message to your existing users, where you collect their credentials securely and offer them a time slot for their migration. A good measure is to bundle your daily slots into bigger units of 3-7 days, so you have some wiggle room. Using some intelligent lookup, you only present slots that have not been taken up yet (see the sketch after this list)
  4. Send a nice confirmation message with the date and the steps to be taken. Let the users know that on cut-over day they can use the new mail system instantly, but it might take a while (replace "while" with "up to x hours" based on your measurements and the mail size intelligence you have gathered) before existing messages show up
  5. When the mailbox is due, send another message to let the user kick off the process (or confirm her consent that it kicks off). In that message it is a good idea to point to learning resources like the "what's new" summary or training videos or classes
  6. Once the migration is completed, send another message with some nice-looking stats, thanking the users for their patience
  7. Communicate, Communicate, Communicate!
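For step 3, the slot lookup can be as simple as filtering out slots whose nightly batch capacity is already claimed. A minimal TypeScript sketch with made-up field names and capacities:

    interface MigrationSlot {
      start: string;      // ISO date of the slot window
      capacityGB: number; // how much mail data the nightly batch can move
      bookedGB: number;   // already claimed by registered users
    }

    // Only offer slots that still have room for this user's mailbox.
    function availableSlots(slots: MigrationSlot[], mailboxGB: number): MigrationSlot[] {
      return slots.filter(s => s.bookedGB + mailboxGB <= s.capacityGB);
    }

    // Example: a 4 GB mailbox only fits into the second slot.
    // availableSlots(
    //   [{ start: "2014-05-05", capacityGB: 500, bookedGB: 498 },
    //    { start: "2014-05-12", capacityGB: 500, bookedGB: 120 }], 4);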
The checklist covers the user-facing part of your migration. You still have to plan DNS cut-over, routing while moving, https access, redirection on mail links etc. Of course that list also applies to your pilot group/test run.
As usual: YMMV
