Blog

Lotus Notes got dumped because employees didn’t have smartphones?

In the wake of yesterday's blog posting on Microsoft licensing BinaryTree for Office 365 migrations, I was sent a link from a few weeks ago.  Bang & Olufsen were not quiet about their move from Lotus Notes to Office 365.  This ComputerWeekly article did bring out a bizarre reason though:
The company had issues with accessibility with its previous system. About 75% of employees work remotely but, Lotus Notes was only accessible to the 150 employees who had company-provided smartphones. The other 1,350 had no access when working remotely.



So the lack of knowledge about:
  • Lotus Traveler
  • BYOD support
  • managed replicas
stops you and somehow forces you into migrating to a whole new platform?

Straight from Ole Damsgaard, senior director of IT & shared service centre at Bang & Olufsen:
“It was easy to integrate Office 365 into our existing work environment and for our employees to start using it right away because they already know tools like Outlook and other Microsoft products,” he said.

How can it be easy to integrate something that was not in use at your organization? How are employees using Outlook when they use Lotus Notes daily (since you are migrating them)?

It sounds more like IBM lost in trying to sell them on hosted offerings compared to Microsoft. Thoughts?

    On Wednesday, September 12th, 2012   by Chris Miller        

I was following Tom Duff’s post (and comments) on a Ray Ozzie post for other reasons

Instead of linking to Ray I will just link to Tom here.  But I did grab this topic I wanted to cover from the exact posting Tom was talking about.
Notes had just about the simplest possible replication mechanism imaginable.  After all, we built it at Iris in 1985 for use on a 6Mhz 286-based IBM PC/AT with incredibly slow-seeking 20MB drives.  We were struggling with LIM EMS trying to make effective use of more than 1MB of memory.  Everything about the design was about implementation simplicity and efficiency.

Besides understanding what Tom was saying about not being able to actively comment back, since he says he has discussions going on (which I personally take to mean with MS people, as I grabbed maybe 6 or 7 links and saw no responses from Ray), I did find the idea intriguing.

One trackback posting made a quite simple and decent comparison of the pull technology of RSS with the proposed pull-pull approach of SSE.  The initial spec has nothing noted about security or master sources yet, but my thought here is that it will grow into that, with Ray having input and his statement above about Notes.  With the moves into XML throughout Microsoft products, enabling SSE is the first move toward having replication in their technologies over another standard, instead of the proprietary Domino replication abilities.  The security and authorization side has a long way to go yet, have no fear.

If we take this like school, Ray is trying to develop a new learning program on new standards, while Lotus has had an established college for 20 years that has grown around some very basic roots of security, portability and simplified scalability.

The point of this posting is not how Lotus does the replication, but the far-reaching capabilities it has after years of growth and enhancements.  Then Ray floats an idea to base some Microsoft work on emerging specs, and the slower flocks will follow far too soon.  Take that last part and let it marinate some.

    On Tuesday, November 29th, 2005   by Chris Miller        

Replication Topology 205 - Tiered (Binary Tree)

Prerequisite: You must have completed Replication Topology 101 - the basics (which was done in July), plus 102, 103 and 104, before taking this course. :-)  I know I have taken my time on these but I find too many other things come up, plus it keeps you waiting for the next one.

Welcome to the graduate level course material (as Tom Duff said it should be)!!!!

HOMEWORK:  From now on you are required to draw out the topology for your environment at each level.  Even if you are doing it for future planning or hypothetical looks.  This is a learning experience folks.

Here is what I said about Tiered (Binary Tree) topology:
Taking the hub and spoke idea a bit different, a central server updates two or a few servers.  Those servers update two or more each and so on down the pyramid.  This works well if you have some good network connections to a few servers and then those have some decent speed to downstream servers without the top having that speed access.  Otherwise you could go back to hub and spoke.  The downside is that in a large tiered environment, it can take some time for a change to go up and down the tree if they do not share a parent server all the way to the top.  I have seen some tiers that cross somewhere in the middle to alleviate that and leave the top server for administration and NAB master.


The Good:

A well thought out tree keeps the data flowing, makes it locally available, and with multiple tiers it can move data between localities even if the connection to the main servers is down.  This is a great solution for multi-continent deployments or in countries that have Internet connectivity issues to the outside world.  Imagine a tier in America, Europe and Australia.  All the top level servers from each country then tier up to one other server in China.  If the link to China goes down, each country will still have the updates from all sites within itself.  Later, the rest of the world will catch up.


This idea also gets around timezone difficulties.  Data is most important to other sites in your timezone (in most instances; yes, there are some corporate apps that rely on HQ, but that is a side class).  So moving it between multiple cities to the top tier in that country keeps people happy.  You could add some more tier levels into the mix, but for homework, draw one out for your company, no matter how big.

The Bad:

I said it best in the outline from the very start.  You can spend an enormous amount of time if you build the pyramid too large.  Imagine how it was done in ancient times.  One large stone was carried from the bottom to the top, very slowly.  You knew it was coming, you could see it in the distance down the great pyramid, but it took forever to get to the top so you could build on it.  Then the call had to go all the way back down the other side to let them know it was there.  Companies try to get around this by speeding up the cycle time between each hop.  However, your schedule can become faster than the replication time of the data, and you start to miss things until it can catch up.  I recently saw this with a DMZ at one corner of the pyramid.  During the day it was trying to keep up with the fast 7-minute cycle that was set.  However, they noticed some data not showing up for 2-plus hours.  Looking in the logs, we saw that replication was never finishing during the times the most data was being updated all over.  Then when the day slowed down, or at night, it could easily catch up.  This also had to do with the bandwidth being utilized, but it all adds to the issue.
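
To put some rough numbers on the up-and-down-the-tree delay, here is a quick back-of-the-envelope sketch in Python.  The server names, the small three-level tree and the 15-minute cycle are all made up for illustration; the math is simply hops multiplied by the schedule interval.  And as the DMZ story shows, if an actual replication run takes longer than that interval, the backlog only grows.

  # Hypothetical three-level tiered topology, written as child -> parent.
  parents = {
      "EU-Hub": "Top", "US-Hub": "Top",
      "Paris": "EU-Hub", "Berlin": "EU-Hub",
      "Dallas": "US-Hub", "Boston": "US-Hub",
  }

  def path_to_root(server):
      path = [server]
      while server in parents:
          server = parents[server]
          path.append(server)
      return path

  def hops_between(a, b):
      # Up from a to the first shared parent, then back down to b.
      up, down = path_to_root(a), path_to_root(b)
      shared = next(s for s in up if s in down)
      return up.index(shared) + down.index(shared)

  cycle_minutes = 15  # hypothetical schedule on every connection
  worst = hops_between("Paris", "Boston") * cycle_minutes
  print(f"Paris to Boston worst case: about {worst} minutes")  # 4 hops, ~60 minutes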

We had strike #3 already; I guess this is the start of our #2 in the graduate level class.


    On Wednesday, November 2nd, 2005   by Chris Miller        

Replication Topology 104 - Ring

Prerequisite: You must have completed Replication Topology 101 - the basics (which was done in July), plus 102 and 103, before taking this course. :-)  I know I have taken my time on these but I find too many other things come up, plus it keeps you waiting for the next one.

Here is what I said about the Ring topology:
Simple enough, servers call each other in a circle updating, adding and deleting as they go.  The downside relies on a large ring where it can take some time to get all the way around.  Also, if one server in the ring goes down, so goes the cycle.

As we do in each entry, we do a good side/bad side review of this design.  Then it is up to you to compare and modify yours as we go through each one from class 101.

The good: This one is easy to implement and reduces the amount of connection records necessary.  Since you only need one record for the next hop, you only have as many replication connection docs as you do servers.  Simplicity.  If data does not need to be in total synchronization all the time, this is a beginner cycle that small companies embark on sometimes.  But what happens when you get too big or need that data more in time?  Reducing the cycle time is great, but you are still playing with fire.

The bad: It turns ugly here.  Timeliness is the issue with the data.  Multiple people possibly working on the same document across multiple servers comes into play for replication and save conflicts.  It is more than possible that a database that resides on servers in, let's say, 5 offices has customer information updates.  Then as it cycles around, who is right, and which data is the newest?  Now we have phone calls and emails trying to figure that out.

The loss of a server in the cycle for whatever reason (crash, network outage, backups) means only the servers up to the hop before that server stay in sync.  The server before the dead one can never send the updates to the servers on the other side of the chasm.  Ouch!
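
A tiny sketch of that break, with a hypothetical five-server ring and a 30-minute schedule (both invented for illustration): the worst-case trip around the circle is (N - 1) hops, and once a server in the ring drops, everything downstream of the break stops getting the update.

  ring = ["A", "B", "C", "D", "E"]  # A calls B, B calls C, ... E calls A
  cycle_minutes = 30

  worst_hops = len(ring) - 1
  print(f"Worst case around the ring: {worst_hops * cycle_minutes} minutes")

  # One server down splits the circle: trace how far an update from A gets.
  down, source = "C", "A"
  reached, i = [], ring.index(source)
  while True:
      nxt = ring[(i + 1) % len(ring)]
      if nxt == down or nxt == source:
          break
      reached.append(nxt)
      i = ring.index(nxt)
  print(f"With {down} down, an update made on {source} only reaches {reached}")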

Just picture when you were a kid and you played that game where you whisper in your neighbor's ear, then he passes it on to her and so on around the room until it gets back to you.  The outcome is normally a screwed up story.  Domino will keep that data in sync, no fear, but what happens when one of the kids leaves to go to the bathroom and never returns?  Think they will know where the story is tomorrow?  Nope, because someone else changed it along the way.  Or worse yet, it just stops midway around and never gets back to you.

So strike #3 with this topology design.

    On Monday, October 17th, 2005   by Chris Miller        

Replication Topology 103 - End to End

Prerequisites are Replication Topology 101 (the basics) and 102 (peer to peer) before you hit this one.

From the beginning, I gave a scenario of how it looks:
Basically data starts on one end, passes through multiple servers through replication and then comes right back.  Timing becomes an issue to make sure that data can make it all the way down and back before the next baton is passed.  Think of it as runners that pass the baton, and if one runner takes off too early, who knows where the baton is.

So I hopefully already broke you away from the idea of a meshed environment in class 101 due to the sheer number of connection records that are possible and messy management.

End to end offers its own set of benefits and pitfalls, of course.  Imagine your science class from way back in elementary school, where they gave you a stack of batteries and a bunch of light bulbs.  You were then told to light them all up.  The first thought is batteries, then wire to the next bulb, then wire to the next bulb and so on until they were all connected.  Well, if one bulb went out in that serial connection, then every one behind it went out.  So the teacher taught you about parallel connectivity to get around it, which end to end does not do in its true form.  Any variation moves it towards a circular or even tiered architecture (with a bizarre slope).

The benefit is that data passes along in a cycle, reducing replication conflicts.  Save conflicts are entirely different, as people across the string could be editing the exact same document on every server.  Timing, as I mentioned, also becomes an issue, since it could take any amount of time to get the data back and forth.  If a server or network is down, the others will replicate as scheduled, yet that missing link in the middle brings the idea of timeliness to a screeching halt on each end.
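
For a rough feel of that timing problem, here is a small sketch with a hypothetical five-server string and a 20-minute schedule (both invented for illustration): an edit at one end needs (N - 1) hops to reach the far end and just as many to come back, so the schedule has to leave room for the full round trip.

  chain = ["End1", "Mid1", "Mid2", "Mid3", "End2"]
  cycle_minutes = 20

  one_way = (len(chain) - 1) * cycle_minutes
  round_trip = 2 * one_way
  print(f"One way: about {one_way} minutes, full round trip: about {round_trip} minutes")
  # With 5 servers and a 20-minute cycle: 80 minutes one way, 160 minutes round trip.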

The end result is a long line of servers, spread in the same room or geographically, that have a start and end point.  Sure, you can argue that every topology has a start and end point.  But with the proper hub cluster setup, only an individual spoke failure would affect any users.  In end to end design, there are too many holes along the way.

    On Tuesday, August 16th, 2005   by Chris Miller        

More on Replication Topology Class 102

My friend Declan posted a response describing how he handles some of the replication topology.  I thought it needed to be brought to the front for those that don't watch comments, plus he extended some good points.
In Domino 6 and higher there is a database property called 'Update design on Adminp server only'.  This new setting means that when the design task runs you have more control over what server the design gets updated on.

I have no problem with the design task running on all my servers. It's a well controlled environment, developers don't have development rights on the production domain and all new apps must be template based so I can put the template on the server, sign it and then create the new app. This makes upgrades a lot easier.

I like that new feature; it works great for new applications rolling out.  But it is hard, coming into older environments with a large number of applications, to get this flag set on all the databases.  It also lets you localize some databases that do need to be on the hub and still let Designer run.

The topology and timing need to be watched during changes since they are now fully centralized.  If you have missing or slow replication, the new design notes might not make it out to the remote sites fast enough.  Say you and the hub are in LA and the users are in Germany.  If you are doing midnight design changes with 120-minute replication cycles, they will be well into their office day by the time the change rolls around.
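
A quick back-of-the-envelope on that LA/Germany example, assuming a 9-hour offset (it shifts with daylight saving) and a worst-case wait of one full 120-minute cycle:

  design_refresh_la = 0   # midnight, LA local time (hour of day)
  cycle_minutes = 120     # replication schedule on the connection
  offset_hours = 9        # assumed LA -> Germany time difference

  arrives_la = design_refresh_la + cycle_minutes / 60   # worst case: wait out a full cycle
  arrives_de = (arrives_la + offset_hours) % 24
  print(f"New design lands in Germany around {arrives_de:.0f}:00 local time")
  # Roughly 11:00, with people already in the office using the old design.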

You are dead on with developers not playing in the production sandbox at all.  That whole conversation falls more under a policy/strategy for development versus production environments.  A whole topic in itself.  Some say two domains, some say one that is strictly controlled and segregated.  So let's have that conversation shortly.


    On Friday, August 5th, 2005   by Chris Miller        

Reader question on replication topology class 102

I received an IdoNotes email about my posting from Monday Aug 1st.  The reader asks:
I wonder if you could write a few comment on what DB you recommend to replicate as default in a Hub Spoke. I think one as default should only replicate Names.nsf and Admin4.nsf and for names.nsf disable designer. Then each server will have the templates for the installed version and names.nsf will use that template. Designer runs at 1AM default. Maybe one can get some conflict when designer update names.nsf, if the templates is not same version.

Excellent question, and there are numerous answers to it.  I heartily agree that names.nsf and admin4.nsf need to replicate (ever wonder why they didn't migrate it to just the name adminp.nsf?  just asking).  Keep in mind that, depending on your directory structure (whole versus configuration directory), immediate updates of it may not be necessary.  As silly as it sounds, a custom domcfg might make sense to replicate around if you use MSSO.  The Directory Catalog is necessary at some sites if you are trying to maintain what database is where around the spokes.

Let's look at what does not get replicated: log.nsf, certlog.nsf (why would you when the registrations are done on the hub right?), webadmin.nsf, resources.nsf and a few random others.  It gives you a basic idea though.
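
One way to keep that split straight is a simple checklist, sketched here in Python.  The lists come from the databases named above; the dircat.nsf file name is just a common convention, domcfg only matters if you use MSSO, and anything not listed stays a judgment call for your own environment.

  # Replicate by default per the discussion above (domcfg only if you use MSSO,
  # and the directory catalog file name varies by site).
  REPLICATE = {"names.nsf", "admin4.nsf", "domcfg.nsf", "dircat.nsf"}
  LEAVE_LOCAL = {"log.nsf", "certlog.nsf", "webadmin.nsf", "resources.nsf"}

  def hub_spoke_default(filename):
      name = filename.lower()
      if name in REPLICATE:
          return "replicate to spokes"
      if name in LEAVE_LOCAL:
          return "leave local"
      return "decide per application"

  for db in ("names.nsf", "certlog.nsf", "sales.nsf"):
      print(db, "->", hub_spoke_default(db))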

As for the templates, controlling them centrally on a hub with the designer task and then using replication to push out the changes makes tons of sense.  But it leaves a hole for mail and non-central databases.  Many control pushing changes through the ACL but forget about that task, as you allude to.  I like the idea of having the hub as the manager and everything else as editor in the proper layout.  Why would any other server make changes to the design?  I can think of some 3rd party apps that will, though.

I actually think some of the templates should be centrally pushed, but most admins do not remove the templates from the installation since some updates are required when upgrading/installing.  But in the long run, not having them on every server would guarantee consistency.  This comes down to a strategic and management issue in the long run.  One we can explore more if people want.

This topic was approached in the Ask the Experts session at AdvisorLive a few weeks ago.  I expressed my opinions on the Designer task then also.

    On Wednesday, August 3rd, 2005   by Chris Miller        

Replication Topology 102 - Peer to Peer (Meshed) exposed

Prerequisite: You must have completed Replication Topology 101 - the basics (which was done in July)  before completing this course.

Sorry for the delay, but other posts were taking precedence. So let's get right to it.

One of the dilemmas when building out the infrastructure is how to start the replication topology after you break away from just one server. Let us not debate why someone does not have a cluster; just live with the fact that plenty of sites out there still have a single server.  When there are two servers, it should be obvious.  One calls the other and it is done.  Add a third to the mix and decision making seems to evaporate faster than spilled drinks in Las Vegas right now.  For some reason, some admins find it necessary to create a replication connection from one server to every other server, over and over (please note the spaghetti reference from class 101).  Instead of planning a hub architecture right from that point, the confusion begins.
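
To see why that habit gets messy fast, here is a quick count of connection documents in Python, assuming the simplest case of one connection document per calling pair: a full mesh needs one from every server to every other server, while hub and spoke needs one from the hub to each spoke.

  def mesh_connections(n):
      # every server calls every other server
      return n * (n - 1)

  def hub_spoke_connections(n):
      # the hub calls each spoke once, pull-push
      return n - 1

  for n in (3, 5, 10, 25):
      print(f"{n:>2} servers: mesh {mesh_connections(n):>3}, hub & spoke {hub_spoke_connections(n):>2}")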

The good part of this topology is that there is no dependence on a hub server in case of failure.  If you have 3 servers with all these connections and one fails, 66% are still in sync while waiting for the third to come back on-line.  Awesome idea.  A single failure does not stop everyone else from having current data.

Yet, most admins want the data to replicate every few minutes all day long.  Amazingly at the same exact start and end times with the same interval in each connection document.  This leads into two things:
  1. Large possibility of replication/save conflicts as data access and updates take place.  If this application needs that much replication, you can bet it is getting updated regularly and by numerous people.
  2. This is like the 1¢ slots: you play those, soon the 5¢, then the 25¢, then the $1.  Soon you are betting large on the roulette table that you can make document 1 get to server C cleanly and in some timely fashion.

So what does all this get us?  Peer to peer almost works for two servers, yet calling each other back to back doesn't really make sense.  So start thinking about which should be the hub and plan accordingly.

    On Monday, August 1st, 2005   by Chris Miller        

Replication Topology 101 - the basics

Recently this has become a point of, well not frustration, but amazement.  I think I finally got ahold of the answer today though.  When admins are new in a small environment, they don't always get the training they need on how to grow the domain.  So they do what they know best: just go and make it work.  Unfortunately, once your domain starts growing too fast and too large, the lack of basic training becomes the Achilles heel.  So I took it upon myself to right the wrong by throwing this little primer out there.  Oh, there will be some to follow.  This is to get the feet wet for those that need it.

There are a few options of topology design when you have multiple servers in a Domino domain.  You can classify the architecture in a few different ways:
  • Hub & Spoke - A typical design where a central server pushes and controls changes to all the servers around it.  You update one central source and everyone gets happy eventually.  But, if there are too many spokes, you can have times where the hub cannot reach all the servers during a cycle.  So you move to the next couple of ways.  The other downside is the reliance on one central server for all updates.  If the hub dies, so does the topology.
  • Multiple Hub & Spoke - Here there is more than one hub, possibly even in a cluster, that handles the updates to their own sets of spokes.  This allows redundancy for the centralized architecture and lets the servers make the rounds updating the spokes.  This works well in a good LAN speed environment.  The downside, not too many if the central hubs are in a cluster.  That way data can pass across spokes fairly quickly on opposite sides.  If there is no cluster, see above.
  • Tiered (Binary Tree) - Taking the hub and spoke idea a bit different, a central server updates two or a few servers.  Those servers update two or more each and so on down the pyramid.  This works well if you have some good network connections to a few servers and then those have some decent speed to downstream servers without the top having that speed access.  Otherwise you could go back to hub and spoke.  The downside is that in a large tiered environment, it can take some time for a change to go up and down the tree if they do not share a parent server all the way to the top.  I have seen some tiers that cross somewhere in the middle to alleviate that and leave the top server for administration and NAB master.
  • Ring  - Simple enough, servers call each other in a circle updating, adding and deleting as they go.  The downside relies on a large ring where it can take some time to get all the way around.  Also, if one server in the ring goes down, so goes the cycle.
  • End-to-End -  Basically data starts on one end, passes through multiple servers through replication and then comes right back.  Timing becomes an issue to make sure that data can make it all the way down and back before the next baton is passed.  Think of it as runners that pass the baton, and if one runner takes off too early, who knows where the baton is.
  • Meshed (or Peer-to-Peer) - This is basically random servers that call other random servers.  It is all made with some reason when laid out, but you are never quite sure how or when data is getting to somewhere else.  It just shows up.
  • Spaghetti - This is the last result and the most frustrating.  Admins just create connection records from one server to all the others, over and over again, for each server in the domain.  Replication conflicts occur, the servers have no idea who owns the database, and design changes fly everywhere.  I usually encounter this when doing audits of domains where they keep patching and adding band-aids instead of fixing the real issue.  No topology design.  (A rough side-by-side sketch of these options follows below.)
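
Here is the rough side-by-side sketch mentioned above, for a dozen servers: how many connection documents you end up maintaining and roughly how many replication hops a change needs in the worst case.  The numbers are simplified illustrations of the descriptions above, not sizing guidance.

  import math

  def summary(n):
      return {
          "hub & spoke":    {"connections": n - 1,       "worst hops": 2},
          "tiered (tree)":  {"connections": n - 1,       "worst hops": 2 * max(1, int(math.log2(n)))},
          "ring":           {"connections": n,           "worst hops": n - 1},
          "end-to-end":     {"connections": n - 1,       "worst hops": n - 1},
          "mesh/spaghetti": {"connections": n * (n - 1), "worst hops": 1},
      }

  for name, stats in summary(12).items():
      print(f"{name:<15} connections={stats['connections']:>3}  worst hops={stats['worst hops']}")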

So there we are.  We can now mentally picture multiple types of topology right?  But the path of decisions is yet to come.


    On Thursday, July 14th, 2005   by Chris Miller