Building in #MIM2016 Solution Resilience

My company UNIFY is well into its 12th year of existence (as is my tenure), and our “application-driven identity management” mantra has been a core principle in our solution approach from the beginning.  With the advent of cloud and so-called “hybrid” (off/on-premise) identity management I have not wavered from the belief that nothing changes in this regard.  That is despite the popular misconception from certain quarters that identity starts and ends with your on-premise directory.  Let’s just say your AD makes a lousy source of truth (SoT) for identity!

The benefits of aligning enterprise identity lifecycle to its HR platform are many, and at UNIFY we like to focus not just on the various sources of truth (e.g. students vs. staff in an education context), but on the events that should trigger change to an identity profile.  When it comes to harnessing these events, which may be in HR, the on-premise or cloud directory, or even in an LOB application such as a CRM, we are confident our Broker approach is second to none in driving timely identity synchronisation on platforms such as Microsoft Identity Manager (MIM) or any of its predecessors.  Our investment over more than a decade in a common application directory platform for driving IAM solutions is now paying dividends in the form of application sources such as HR systems being presented to an IAM platform such as MIM as an LDAP directory in its own right.

Yet behind the LDAP layer, not all HR systems are created equal … or rather, “some are more equal than others” … but more importantly, not all HR processes, be they BAU (business-as-usual) or EOM (end-of-month), are going to enable the HR platform to lend itself to becoming a proxy identity source from the outset.  Put another way … it’s unlikely you will find any HR manager’s KPIs are related to driving an IAM solution.  Here are some possible questions that might come to mind for the HR system owner:

  • Did anyone ask me if I should allow my HR system to become the primary authoritative ‘source of truth’ for all enterprise directory employee user profiles?
  • Where are all my extra staff going to come from to allow me to meet these new SLAs?
  • Why didn’t anyone think to tell me that entering the backlog of new staff records the day before the monthly pay cycle isn’t going to cut it any more?
  • I wonder if anyone has thought about what should happen when a contractor (or a bunch of them) take on a permanent role?

What the HR manager is NOT likely to ask, however, is the following:

  • How might the IAM platform perform during nightly batch processes?
  • I wonder when would be the best time to take the HR system offline for scheduled maintenance and backups?
  • What if I forget to tell anyone when I upgrade the HR platform or extend the schema?
  • What happens if I want to set up some new test processes in my production environment?
  • Do you think it would be OK for me to delete and reload the entire employee table each night (don’t laugh – I’ve seen this one!)?

Lately I’ve been revisiting the fundamental application-driven principles I’ve taken for granted as being the basis for all good IAM solutions, and asking this question:

“What if (unexpected) sh*t happens?”

The question kind of answered itself, and quickly became this:

“Given that it is inevitable that (unexpected) sh*t will happen, who’s responsible for dealing with it?”

Just quietly, I may have been guilty in the past of taking the high ground (subconsciously at least) by thinking to myself (perhaps even out loud) “That’s not a matter for the IAM solution to deal with – all problems must surely be addressed at the root!”.  Yet most enterprise environments are always in a state of flux, and it doesn’t help anyone to duck the problem when it might turn out that you are best equipped to make a difference in this regard – with just a bit of planning and lateral thinking.  This is not to say that the source systems are absolved of responsibility – far from it – but rather that it is better to be pro-active and take preventative action to avert a problem that has possibly yet to be seriously considered.

At this month’s (April 2016) MIMTeam User Group Skype Meeting I am presenting the topic “Watch out for that Iceberg!”.  In this session (which includes a demo of a repeatable MIM approach I wish to share with you), I will be asking what we (as IAM consultants, solution architects and implementers) can do to protect our customers or our own companies from unwanted SoT changes.  In particular, how can we be prepared for when unwanted changes happen in large volumes and wreak havoc on the unsuspecting systems and processes you’ve painstakingly aligned with that SoT?  What does the term “resilience” mean when used in the context of your IAM solution?

Please join me on the call (see when this is in my timezone) – looking forward to sharing some thoughts and ideas on this topic.


Using .Where instead of | Where-Object

I’ve been fighting a problem today whereby the PowerShell Where-Object cmdlet was returning results of varying object types from the same XML document.  Specifically, I was trying to check the numbers of adds/deletes/updates in a CSExport XML file, and had isolated the deltas to the following:

$deltas = $csdata.SelectNodes("//cs-objects/cs-object[@object-type='$addObjectClass']/pending-import/delta")

If I then used the Where-Object construct as follows:

$deletes = $deltas | Where-Object {$_."operation" -eq "delete"}

… I would get back either a single object of type System.Xml.XmlLinkedNode or a collection of them.  Because I simply wanted to access the Count property of the collection, this was failing (returning nothing) whenever the result was a single System.Xml.XmlLinkedNode. In looking at the available methods on my $deltas variable (a System.Xml.XmlNodeList returned by SelectNodes) I noticed the Where method … another piece of pure gold!  The syntax is slightly different if I use this:

$deletes = $deltas.Where({$_."operation" -eq "delete"})

… BUT, the result is always a collection (never a single node), so the Count property is always available.

This means that regardless of what XML I am confronted with, the most reliable means of counting the number of changes is to use the Where method on the node list returned by SelectNodes, rather than piping to Where-Object.
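To make this concrete, here is a minimal sketch of the counting pattern (the file path and object type are illustrative only, and the .Where() intrinsic method requires PowerShell 4.0 or later):

# Illustrative only: count pending import operations in a csexport.exe XML dump
[xml]$csdata = Get-Content -Path "C:\Temp\ADMA-csexport.xml"

$deltas = $csdata.SelectNodes("//cs-objects/cs-object[@object-type='user']/pending-import/delta")

# .Where() always returns a collection, so Count behaves whether there are 0, 1 or many matches
$adds    = $deltas.Where({$_."operation" -eq "add"})
$updates = $deltas.Where({$_."operation" -eq "update"})
$deletes = $deltas.Where({$_."operation" -eq "delete"})

"Adds: {0}, Updates: {1}, Deletes: {2}" -f $adds.Count, $updates.Count, $deletes.Count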

One step closer to a reliable means of consistently counting Pending Import and Pending Export changes from either my CSExport or Run Profile audit drop (XML) files. :)


Using -ReadCount 0 with Get-Content

Not that I’m an expert in PowerShell by any means, but here’s my tip of the day … use the -ReadCount parameter with Get-Content!

From the Get-Content page on TechNet …

-ReadCount<Int64>
Specifies how many lines of content are sent through the pipeline at a time. The default value is 1. A value of 0 (zero) sends all of the content at one time.
This parameter does not change the content displayed, but it does affect the time it takes to display the content. As the value of ReadCount increases, the time it takes to return the first line increases, but the total time for the operation decreases. This can make a perceptible difference in very large items.

The point above about the total operation time decreasing was pure gold for me!  Right now I am parsing very large #FIM2010/#MIM2016 audit drop files to check change thresholds, and PowerShell is my tool of choice, not least because it integrates natively with MIM Event Broker (an idea that will be the subject of another post a bit later).

Example for me just now with a 100 MB XML file:

  • 5 mins 17 seconds to load WITHOUT specifying any -ReadCount, with a memory footprint growth of 3.5 GB
  • 39 seconds with -ReadCount 0, and a memory footprint growth of only 750 MB.

Importantly, be aware that this setting is not appropriate while your script is still under construction, when the value should remain at the default of 1.  In my case I found that stepping through my script in the debugger with this set to 0 resulted in painful delays reloading large XML variables.  I am now thinking of using a $debug variable to toggle between 0 and 1 as appropriate – it should definitely be 0 once the script has been deployed.
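Here is a minimal sketch of that toggle idea (the file path is hypothetical):

# Sketch only: toggle -ReadCount between debugging and deployed runs
$debug = $false

# 1 (the default) is friendlier when stepping through in the debugger;
# 0 streams the whole file in one hit and is dramatically faster for large files
$readCount = if ($debug) { 1 } else { 0 }

[xml]$auditDrop = Get-Content -Path "D:\FIM\AuditDrops\RunProfile.xml" -ReadCount $readCount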

So when loading large XML files in future I will certainly be using -ReadCount 0 unless I have a good reason not to – one good enough to justify taking 8 times longer and using almost 5 times the memory. :)


#FIM2010 MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334

Just posted by Peter Geelen – thought this worthy of a reblog for the #FIM2010 and #MIM2016 community.

Identity Underground

This article has been posted on TNWiki at: FIM2010 Troubleshooting: MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334.


Situation

Failing over a FIM Sync Server to the standby FIM sync server using MIISActivate.

After successfully using MIISActivate, the FIM Sync service fails to start and logs an error in the Event Viewer.


Symptoms

You’ll see two error messages in the Event Viewer: error 7024 and error 6324.

Error 7024

Reference

This error is pretty similar to, or exactly the same as, the error described in the following Wiki article:

FIM2010 Troubleshooting: FIM Sync service terminated with service-specific error %%-2146234334.

Error message Text

Log Name: System
Source: Service Control Manager
Date: 3/02/2016 15:08:59
Event ID: 7024
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: servername.domain.customer
Description:
The Forefront Identity Manager Synchronization Service service terminated with service-specific error %%-2146234334.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager"…



The (#FIM2010) service account cannot access SQL Server …

Ran into this old chestnut just now and thought that it was worth re-visiting the outcome of an old forum post on the subject.

Before I get to the point, by way of background I always start out the installation process with a quick sanity check:

  1. Create a UDL file on the FIM Sync server desktop
  2. Configure the UDL file to connect to the SQL instance you are targeting
  3. Test for connectivity success

The above will ensure you can at least get to “first base” with SQL connectivity, negotiating firewall and network issues.
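If you prefer to script the same sanity check, something like this is a rough PowerShell equivalent of the UDL test (the server, instance and timeout values are hypothetical):

# Rough scripted equivalent of the UDL connectivity test
$connectionString = "Server=SQLHOST\FIMINSTANCE;Database=master;Integrated Security=SSPI;Connection Timeout=10"
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
try {
    $connection.Open()
    Write-Host "Connected OK - SQL Server version $($connection.ServerVersion)"
}
catch {
    Write-Warning "Connection failed: $($_.Exception.Message)"
}
finally {
    $connection.Dispose()
}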

When installing the FIM Sync service any number of connectivity issues can prevent you progressing through the installer wizard.  For instance, if you’ve got a remote SQL database and you’ve forgotten to install the appropriate SQL Native Client then you will be stuck on the page configuring the SQL connection.

Once you get past this problem it’s generally onto the next … the configuration of the FIM Sync service account.  The full text of the error you might run into is this:

The service account cannot access SQL server. Ensure that the server is accessible, the service account is not a local account being used with a remote SQL server, and that the account doesn’t already have a SQL login.

The error text can be quite misleading – because (as was the case with the linked thread) the problem can be the installer access itself.  The installer account (not the service account itself) MUST be a member of the SQL sysadmin role to have any hope of progressing beyond this point.  Generally you will want to (or be asked to!) remove this access after a successful install.
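If the SQL Server PowerShell module happens to be installed, a quick way to confirm the installer account holds the required role before launching setup is something along these lines (the instance name is hypothetical):

# Returns 1 if the account running the command is a member of the sysadmin fixed server role
Invoke-Sqlcmd -ServerInstance "SQLHOST\FIMINSTANCE" -Query "SELECT IS_SRVROLEMEMBER('sysadmin') AS IsSysadmin"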

Thanks to those who bother contributing answers to the TechNet forums – they are incredible time savers, often long after the threads are closed.


#FIM2010 R2 Scoped Sync Rules – Part 2 (The Experience)

So I decided to take up the challenge on a recent FIM2010 R2 project – outlined in the first part of this post.

Let’s just say there are plenty of FIM folk who would simply ask ‘why?’ …

  • Why would I want to even try working with declarative rules at all?
  • If something isn’t broken (rules extensions), why fix it?
  • Why do you think it will give a better outcome?
  • Why do you think scoped rules will work when the alternative type promised so much but failed so spectacularly?
  • Why would you want to put yourself through the wringer when you could fail and bring your project down with it?

Well, for a variety of reasons, let’s just imagine for a moment that I had convincing answers for each of these – answers that struck such a chord with you that you just want to read on and find out how I did it. We can come back to the above at the end. Rest assured, however, that I was not completely convinced myself, and at the outset I still had a bet each way on failing. So here goes …

No De-provisioning

Firstly, I knew that for this approach to work I couldn’t de-provision – that is to say, disconnect objects from the Metaverse and thereby cause deletions (or something similar) in any of my connected systems.

If you expect your SRs to do this for you then you will need the traditional ERE model. However, when I looked closely at requirements that might at face value appear to require this capability, I found that in each case the need wasn’t really there at all. For starters, for systems which are not authoritative sources of identity, it is usually a bad idea to leave a CS entry as a disconnector – doing so can leave you with “reverse join” problems if you subsequently need to re-connect. Equally, deleting the target object was generally never an option, because of the risk of compromising the downstream target system (e.g. orphaned ACLs in AD, or SharePoint documents or sites without owners).

I reasoned that choosing not to disconnect at all was the better option. Yes this could lead to “bloat” issues if left unchecked over a long time. However, the alternative of trying to control the deletion/archive process from FIM is often impractical. I adopted the standard alternative to deprovisioning AD accounts, disabling and moving them to a ‘disabled users’ container, and leaving it to the AD system admins to handle the deletion and archive process – usually after a delay of a number of months. I also figured that if at some stage I needed to handle the archiving as part of the FIM design, then this could be comfortably achieved by an out-of-band PowerShell script, e.g. initiated as a post-processing step after an export run profile is executed.

So … No de-provisioning? No problem.

Avoiding Rules Extensions

As soon as you know you’ve got to handle anything other than the most basic of transformations, you find yourself drifting inexorably towards writing these things. So my strategy was to keep any transformations as simple as possible by maximising direct flow rules.

If you want to sync to an LDAP style directory target, then the best choice of an authoritative source is also a directory structure – ideally at least vaguely close to the target schema. But how do you achieve this when your source system(s) are invariably relational systems rather than directory structures? The answer is to re-imagine your relational data as if it was an LDAP directory.

In order to explain the approach, consider a simple relational database with the following entities in an imaginary student management (SMS) system:

  • Student – 1000s of individuals, each belonging to one or more classes
  • Class – 100s of classes, each belonging to a single year
  • Year – 10s of years
  • Teacher – 10s of teachers, each assigned to one or more classes

Each entity is related to one or more of the other entities via a database foreign key constraint. The SMS relational structure for these entities would therefore look something like this:

  • Student <=> Class (generally physically stored as Student <= StudentClass => Class)
    • Class => Year
    • Class => Teacher

Our target Metaverse might have corresponding resource types as follows:

  • Student
    • Classes (multiple)
  • Class
    • Teacher (single)
    • Year (single)
    • Students (multiple)
  • Teacher
    • Classes (multiple)
  • Year
    • Classes (multiple)

In order to generate as many direct attribute flows as possible, what must happen is that the connector space schema for the SMS management agent must align itself as closely as possible to the Metaverse, if not mirror it exactly. The trick to doing this is to use an LDAP schema for your CS, which means one thing – converting foreign key relationships into distinguished name collections. In the above structure we could achieve this as follows:

  • UID=<StudentID>,OU=Students
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
    • UID=<TeacherID>,OU=Teachers
    • UID=<StudentID>,OU=Students
  • UID=<TeacherID>,OU=Teachers
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • OU=<Year>,OU=Year
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year

There is no right/wrong here when it comes to inventing a DN structure – just that it should allow the CS to mirror the Metaverse such that attribute flows in/out of it are direct, or at worst simple transformations. Most importantly, the reference attribute flows must almost always be direct. Furthermore, if you found yourself having to transform multi-valued attributes, then not only would scoped sync rules not be for you, but more than likely the traditional ERE style would be no good to you either!

So as you can see, by re-imagining your source system as an LDAP structure such as the above, the sync design becomes quite straightforward – and lends itself nicely to scoped sync rules.
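By way of illustration only (this is not how Identity Broker does it internally), the foreign-key-to-DN idea boils down to something like the following, where the hypothetical $enrolments array stands in for rows from the StudentClass link table:

# Hypothetical rows from the SMS StudentClass link table
$enrolments = @(
    [pscustomobject]@{ StudentID = "S1001"; ClassCode = "7MAT1"; Year = "2016" }
    [pscustomobject]@{ StudentID = "S1001"; ClassCode = "7ENG2"; Year = "2016" }
    [pscustomobject]@{ StudentID = "S1002"; ClassCode = "7MAT1"; Year = "2016" }
)

# Re-imagine each student as an LDAP-style entry whose class memberships are DN references
$studentEntries = $enrolments | Group-Object StudentID | ForEach-Object {
    [pscustomobject]@{
        dn      = "UID=$($_.Name),OU=Students"
        classes = @($_.Group | ForEach-Object { "UID=$($_.ClassCode),OU=Classes,OU=$($_.Year),OU=Year" })
    }
}

$studentEntries | Format-List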

Of course if you have a tool that allows you to easily

  • Build consistent LDAP schema for your FIM connectors
  • Replicate changes from your source systems through this structure and into FIM
  • Allow for bi-directional flow
  • Combine multiple data sources (e.g. Text file and/or SQL and/or PowerShell and/or Web Service) in a single connector space

… then that tool (let’s call it UNIFY Identity Broker, because that is its name) drives a consistent, high-performing and highly maintainable set of FIM connectors.

In my latest solution ALL of my FIM management agents besides the AD and FIM connectors were instances of Identity Broker connectors. Of these, most accessed the connected system via a PowerShell layer.

Using Out-of-Band Processes

When there is simply no FIM function available to perform a transformation, then the problem with scoped sync rules is that you can’t employ workflow parameters to pass in data constructed by custom workflow activities. This means you either have to resort to rules extensions (which I was determined NOT to do), or think outside the square a little. Three scenarios come to mind.

  1. Generating a unique account name and email alias (e.g. John.Smith1).
    In the days before the declarative model, this process was always achieved with provisioning rules extensions. With ERE-style declarative came the ability to use custom workflow activities, but these tended to become problematic under a number of well documented use cases. Now with scoped sync rules I had to come up with another way of doing this. We tried a couple of ideas, but ended up settling on using a PowerShell management agent to work in harmony with the standard AD management agent, and this worked a treat:

    1. Initial flow rules removed from the AD sync rules completely, leaving it to join and perform persistent flow rules only;
    2. Account (and optional mail alias) creation was performed entirely by a PowerShell MA, which used LDAP lookups on the target AD forest(s) to arrive at a unique value and insert what was effectively a “stub account” immediately (no initial password);
  2. Setting the initial password and notifying the manager in an email.
    1. An extension to the above was to set the initial password in a PowerShell workflow activity, and pass the value back to a WorkflowData variable to allow this to be included in an email notification.
    2. Once the password was set, a “PasswordIsSet” flag on the account was set to TRUE; this was tied to the EAF for userAccountControl in the AD sync rule, so that the AD account was only activated once a password had been assigned.
      This gave us an alternative to the workflow parameter approach used with the ERE style sync rules.
  3. Setting an AD extension attribute value to the Base64 encoded value of the AD GUID.
    Performing this task is easy in a rules extension, but impossible with scoped sync rules given the available function set. However, this could be performed as either a secondary step in the “set password” workflow, or as a post-processing PowerShell task which searched the target FIM OU for accounts with a missing extensionAttributeXX value and set the value (a sketch of this, along with scenario 1’s account-name generation, follows this list). Either way, this did the trick.

There were a number of other variations on the above ideas used at various times in the design, but the above 3 are the main ones that spring to mind. These are enough to make the point – that if you’re willing to work to the limitations of scoped sync rules by employing methods such as the above, then your FIM sync design ends up with no rules extensions – and no EREs either!
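For what it’s worth, here is a rough sketch of the flavour of PowerShell behind scenarios 1 and 3 above. The OU path, attribute name and naming convention are all hypothetical, and Get-ADUser simply stands in for the LDAP lookups the real PowerShell MA performed:

Import-Module ActiveDirectory

# Scenario 1 (sketch): derive a unique account name by probing the directory
function New-UniqueAccountName {
    param([string]$GivenName, [string]$Surname)
    $base = ("{0}.{1}" -f $GivenName, $Surname) -replace "[^A-Za-z0-9\.]", ""
    $candidate = $base
    $suffix = 1
    while (Get-ADUser -Filter "sAMAccountName -eq '$candidate'") {
        $candidate = "{0}{1}" -f $base, $suffix
        $suffix++
    }
    return $candidate
}

# Scenario 3 (sketch): stamp the Base64-encoded objectGUID into an unused extension attribute
$targetOU = "OU=Managed Users,DC=contoso,DC=com"
Get-ADUser -SearchBase $targetOU -Filter 'extensionAttribute12 -notlike "*"' -Properties extensionAttribute12 |
    ForEach-Object {
        $base64Guid = [System.Convert]::ToBase64String($_.ObjectGUID.ToByteArray())
        Set-ADUser -Identity $_ -Replace @{ extensionAttribute12 = $base64Guid }
    }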

Summary

No doubt there will be times when your requirements prevent you from using scoped declarative rules. As mentioned in Part 1, there are a couple of check-points you need to clear before you can be confident of proceeding any further, and I have attempted to cover these here. In my case I was able to design and (with the help of my able colleagues) implement a reasonably complex FIM sync solution based entirely on scoped sync rules.

In my last post on this topic I plan to reflect on the overall result and all those ‘why?’ questions. I’ll also share a utility I used to troubleshoot objects that hadn’t had the expected sync rules applied. With the ERE model you can see that the sync rule has been physically attached to the target – but scoped sync rules have no such indicator, making troubleshooting much more difficult without the aid of a new tool. I’ll also share with you a couple of FIM sync rule bugs I uncovered but was happily able to work around while the problems are fixed by Microsoft in the fullness of time.


#FIM2010 R2 Scoped Sync Rules – Part 1 (The Vision)

Since the concept of ‘declarative sync rules’ was first introduced with FIM 2010 there have been numerous attempts to eliminate the need for rules extensions altogether, but rarely have these been successful.  In all but the most trivial of scenarios we find ourselves resorting to writing .Net code when we invariably run into the now well-documented limitations of this style of custom sync rule.  On top of this, when using the original MPR-based rules, the extra sync overhead of expected rule entry (ERE) processing drove most of us to distraction, especially in the build phase where sync rules are always evolving – not to mention when large numbers of sync objects were involved.

At TEC in San Diego in 2012, some years after the inception of the declarative model, David Lundell presented his FIM R2 Showdown — Classic vs. Declarative presentation.  Despite his protestations to the contrary, many present left his entertaining presentation firmly of the view that the traditional model won hands down.  The argument in favour of traditional went something like this:

  • Declarative will only work at best 8 times out of 10 (see the links below for the main scenarios where these fall short);
  • In cases where it does not work the options are custom rules extensions and/or custom workflow activities;
  • Even if there is only one case where declarative doesn’t cut it, we are left with business rules to maintain in more than one place;
  • Given that most would consider a single place to maintain sync rules to be more than highly desirable, why bother at all with declarative given that you can always build 100% of your sync rules with rules extensions?

The Microsoft FIM product group invested a lot of energy in bringing the whole declarative concept to fruition, and they are not about to give up any time soon.  There has been a steely resolve to make a success of this approach, due mainly to feedback from MIIS/ILM customers and prospects (before FIM) that the product’s biggest weakness was that you couldn’t actually provision anything without writing at least some .Net code, no matter how small.  To their credit they took this and other feedback like it on board, and responded with the concept of a ‘scoped sync rule’ alternative in FIM 2010 R2.

Those of us who were not so jaded by our own forays into the declarative world as to have ‘thrown in the towel’ by this point took some interest in this development.  Of all of my own experiences with declarative rules, it was the ERE which frustrated me (and my customers) the most.  At one particular site, the slightest rule change always meant many hours (even days) of sync activity to re-baseline the sync service.  When this time exceeded the available change windows, I couldn’t help but feel at least partially responsible for the administrators’ pain.  Given I had done my share of MCS FIM projects where the declarative model was actually mandated (a case where the sales pitch had often set unrealistic expectations with the customer), it was clear to me then that Microsoft wasn’t going to give up on the idea, so I might as well try to ‘get with the programme’.  Consequently, on the next major MCS project I embarked on, I was determined to revisit David’s TEC presentation to see if it might be possible to finally achieve what had become something of a FIM ‘holy grail’ – 100% declarative sync.

I have previously read about others’ experiences in this, including the following posts:

However, in my mind at least, all of these had a common underlying sentiment … “nice try, but no cigar”.  What is more, none of these seemed to talk in any depth (if at all) about the ‘scoped’ alternative to the standard ERE-driven model.

Staring me in the face now was what initially appeared to be a typical FIM sync scenario – with some complexities only emerging well after the initial design was settled:

  • Approximately 10-20K user objects under management
  • Authoritative HR source (SAP), with extended ‘foundation’ object classes (position, department, cost centre, job class, etc.)
  • Provisioning and sync to Active Directory (2 legacy AD forests in a trust relationship, with a new forest to come online at some point in the future)
  • AD group membership provisioning based on foundation data references
  • A hybrid user mailbox provisioning requirement (users split between Office 365 and on premise 2010 Exchange)
  • Provisioning to a legacy in-house access management system (via a SharePoint 2007 list)
  • Sync with an externally hosted call management system (provisioning will eventually follow in a subsequent phase)
  • Office 365 license assignment
  • Notification workflows
  • Write-backs to HR (email, network ID)

With the voices of many nay-sayers ringing in my ears, I remained quietly confident I could pull this off, by taking the following line of thought:

  • So long as I didn’t need to disconnect (de-provision) any objects under sync, I could work with scoped SRs and avoid any use of EREs;
  • If I developed a consistent object (resource) model in the FIM service, modelled heavily on the inherent HR structures and relationships, I would be able to engineer the same consistency in the FIM Metaverse and each connector space;
  • By investing in each extensible connector design (I had 5 of these) I would ensure that I presented data in the same consistent structure, maximising the chances of ‘direct’ attribute flows both inbound (IAF) and outbound (EAF);
  • By taking any complexities known to be beyond the SR capabilities (due mostly to the limited function set) outside of the FIM sync process itself – either within the connector import/export process, or in a pre/post-sync ‘out of band’ process; and
  • Making heavy use of PowerShell (all 5 extensible management agents being instances of a PowerShell connector, as well as all pre/post sync processing).

In the next post I will cover how I went about building to the above principles, and some of the challenges I encountered along the way.  Without giving the game away entirely, all I will say at this point is that for every challenge there was always a work-around – the question was always going to be whether any one of them would force me to write any .Net code.
