#FIM2010 MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334

Just posted by Peter Geelen – thought this worthy of a reblog for the #FIM2010 and #MIM2016 community.

Identity Underground

This article has been posted on TNWiki at: FIM2010 Troubleshooting: MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334.


Failing over a FIM Sync Server to the standby FIM sync server using MIISActivate.

After successfully running MIISActivate, the FIM Sync service fails to start and logs an error in the Event Viewer.


You’ll see two error messages in the Event Viewer: error 7024 and error 6324.

Error 7024


This error is similar, if not identical, to the error described in the following Wiki article:

FIM2010 Troubleshooting: FIM Sync service terminated with service-specific error %%-2146234334.


Error message Text

Log Name: System
Source: Service Control Manager
Date: 3/02/2016 15:08:59
Event ID: 7024
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: servername.domain.customer
The Forefront Identity Manager Synchronization Service service terminated with service-specific error %%-2146234334.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager"…

View original post 735 more words

Posted in FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007 | Leave a comment

The (#FIM2010) service account cannot access SQL Server …

Ran into this old chestnut just now and thought that it was worth re-visiting the outcome of an old forum post on the subject.

Before I get to the point, by way of background I always start out the installation process with a quick sanity check:

  1. Create a UDL file on the FIM Sync server desktop
  2. Configure the UDL file to connect to the SQL instance you are targeting
  3. Test for connectivity success

The above will ensure you can at least get to “first base” with SQL connectivity, negotiating firewall and network issues.
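A UDL file is just an empty text file renamed with a `.udl` extension – double-clicking it opens the Data Link Properties dialog, and a successful test leaves an OLE DB connection string inside the file, something like the following (the provider, server, instance and database names here are placeholders, not from any particular environment):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=FIMSynchronizationService;Data Source=SQLSERVER01\FIMINSTANCE
```

If the UDL test fails, there is little point launching the FIM installer until the basics (firewall, instance name, client provider) are resolved.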

When installing the FIM Sync service any number of connectivity issues can prevent you from progressing through the installer wizard.  For instance, if you’ve got a remote SQL database and you’ve forgotten to install the appropriate SQL Native Client then you will be stuck on the page configuring the SQL connection.

Once you get past this problem it’s generally onto the next … the configuration of the FIM Sync service account.  The full text of the error you might run into is this:

The service account cannot access SQL server. Ensure that the server is accessible, the service account is not a local account being used with a remote SQL server, and that the account doesn’t already have a SQL login.

The error text can be quite misleading – because (as was the case with the linked thread) the problem can lie with the installer account’s access rather than the service account’s.  The installer account (not the service account itself) MUST be a member of the SQL sysadmin role to have any hope of progressing beyond this point.  Generally you will want to (or be asked to!) remove this access after a successful install.

Thanks to those who take the trouble to contribute answers to the TechNet forums – they are incredible time savers, often long after the threads are closed.

Posted in Uncategorized | 3 Comments

#FIM2010 R2 Scoped Sync Rules – Part 2 (The Experience)

So I decided to take up the challenge on a recent FIM2010 R2 project – outlined in the first part of this post.

Let’s just say there are plenty of FIM folk who would simply ask ‘why?’ …

  • Why would I want to even try working with declarative rules at all?
  • If something isn’t broken (rules extensions), why fix it?
  • Why do you think it will give a better outcome?
  • Why do you think scoped rules will work when the alternative type promised so much but failed so spectacularly?
  • Why would you want to put yourself through the wringer when you could fail and bring your project down with it?

Well, for a variety of reasons, let’s just imagine for a moment I had convincing answers for each of these that struck such a chord with you that you just want to read on and find out how I did it. Maybe let’s come back to the above at the end. Rest assured, however, that I was not completely convinced myself, and at the outset I still had a bet each way on me failing. So here goes …

No De-provisioning

Firstly I knew that for this approach to work I couldn’t de-provision – that is to say, disconnect objects from the Metaverse and cause deletions or something similar to happen for any of my connected systems.

If you expect your SRs to do this for you then you will need the traditional ERE model. However, when I looked closely at requirements that might on face value appear to require this capability, I found that in each case the need wasn’t really there at all. For starters, for systems which are not authoritative sources of identity, it is usually a bad idea to leave a CS entry as a disconnector. Doing this can leave you with “reverse join” problems if you subsequently need to re-connect. Equally, deleting the target object was generally never an option, because of the risk of compromising the downstream target system (e.g. orphaned ACLs in AD, or SharePoint documents or sites without owners).

I reasoned that choosing not to disconnect at all was the better option. Yes this could lead to “bloat” issues if left unchecked over a long time. However, the alternative of trying to control the deletion/archive process from FIM is often impractical. I adopted the standard alternative to deprovisioning AD accounts, disabling and moving them to a ‘disabled users’ container, and leaving it to the AD system admins to handle the deletion and archive process – usually after a delay of a number of months. I also figured that if at some stage I needed to handle the archiving as part of the FIM design, then this could be comfortably achieved by an out-of-band PowerShell script, e.g. initiated as a post-processing step after an export run profile is executed.

So … No de-provisioning? No problem.

Avoiding Rules Extensions

As soon as you know you’ve got to handle anything other than the most basic of transformations, you find yourself drifting inexorably towards writing these things. So my strategy was to keep them as simple as possible by maximising direct flow rules.

If you want to sync to an LDAP style directory target, then the best choice of an authoritative source is also a directory structure – ideally at least vaguely close to the target schema. But how do you achieve this when your source system(s) are invariably relational systems rather than directory structures? The answer is to re-imagine your relational data as if it was an LDAP directory.

In order to explain the approach, consider a simple relational database with the following entities in an imaginary student management (SMS) system:

  • Student – 1000s of individuals, each belonging to one or more classes
  • Class – 100s of classes, each belonging to a single year
  • Year – 10s of years
  • Teacher – 10s of teachers, each assigned to one or more classes

Each entity is related to one or more of the other entities via a database foreign key constraint. The SMS relational structure for these entities would therefore look something like this:

  • Student <=> Class (generally physically stored as Student <= StudentClass => Class)
    • Class => Year
    • Class => Teacher

Our target Metaverse might have corresponding resource types as follows:

  • Student
    • Classes (multiple)
  • Class
    • Teacher (single)
    • Year (single)
    • Students (multiple)
  • Teacher
    • Classes (multiple)
  • Year
    • Classes (multiple)

In order to generate as many direct attribute flows as possible, what must happen is that the connector space schema for the SMS management agent must align itself as closely as possible to the Metaverse, if not mirror it exactly. The trick to doing this is to use an LDAP schema for your CS, which means one thing – converting foreign key relationships into distinguished name collections. In the above structure we could achieve this as follows:

  • UID=<StudentID>,OU=Students
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
    • UID=<TeacherID>,OU=Teachers
    • UID=<StudentID>,OU=Students
  • UID=<TeacherID>,OU=Teachers
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • OU=<Year>,OU=Year
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year

There is no right/wrong here when it comes to inventing a DN structure – just that it should allow the CS to mirror the Metaverse such that attribute flows in/out of it are direct, or at worst simple transformations. Most importantly, the reference attribute flows must almost always be direct. Furthermore, if you found yourself having to transform multi-value attributes, then not only would scoped sync rules not be for you, but more than likely the traditional ERE style would be no good to you either!

So as you can see, imagining your source system as an LDAP structure such as the above makes the sync design quite straightforward. This lends itself nicely to scoped sync rules.
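To make the re-imagining concrete, here is a minimal sketch of the DN conversion (Python purely for illustration – the real connectors were Identity Broker/PowerShell based; the entity and attribute names come from the imaginary SMS example above, and RFC 4514 DN escaping is ignored for brevity):

```python
# Sketch: re-imagining relational SMS rows as LDAP-style DNs.
# Names follow the imaginary SMS example; escaping is deliberately omitted.

def student_dn(student_id):
    return f"UID={student_id},OU=Students"

def class_dn(class_code, year):
    return f"UID={class_code},OU=Classes,OU={year},OU=Year"

# A Student row plus its StudentClass join rows collapse into a single
# CS entry whose multi-valued reference attribute holds class DNs,
# ready for a *direct* reference flow into the Metaverse.
def student_entry(student_row, student_class_rows):
    return {
        "dn": student_dn(student_row["StudentID"]),
        "classes": [class_dn(r["ClassCode"], r["Year"]) for r in student_class_rows],
    }

entry = student_entry(
    {"StudentID": "S1001"},
    [{"ClassCode": "MATH7A", "Year": "2016"}, {"ClassCode": "ENG7B", "Year": "2016"}],
)
print(entry["dn"])       # UID=S1001,OU=Students
print(entry["classes"])  # two class DNs under OU=Classes,OU=2016,OU=Year
```

The point of the exercise is that once every foreign key has become a DN, the reference attribute flows need no transformation at all.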

Of course if you have a tool that allows you to easily

  • Build consistent LDAP schema for your FIM connectors
  • Replicate changes from your source systems through this structure and into FIM
  • Allow for bi-directional flow
  • Combine multiple data sources (e.g. Text file and/or SQL and/or PowerShell and/or Web Service) in a single connector space

… then that tool (let’s call it UNIFY Identity Broker, because that is its name) drives a consistent, high-performing, and highly maintainable set of FIM connectors.

In my latest solution ALL of my FIM management agents besides the AD and FIM connectors were instances of Identity Broker connectors. Of these, most accessed the connected system via a PowerShell layer.

Using Out-of-Band Processes

When there is simply no FIM function available to perform a transformation, then the problem with scoped sync rules is that you can’t employ workflow parameters to pass in data constructed by custom workflow activities. This means you either have to resort to rules extensions (which I was determined NOT to do), or think outside the square a little. Three scenarios come to mind.

  1. Generating a unique account name and email alias (e.g. John.Smith1).
    In the days before the declarative model, this process was always achieved with provisioning rules extensions. With ERE-style declarative came the ability to use custom workflow activities, but these tended to become problematic in a number of well-documented use cases. Now with scoped sync rules I had to come up with another way of doing this. We tried a couple of ideas, but ended up settling on using a PowerShell management agent to work in harmony with the standard AD management agent, and this worked a treat:

    1. Initial flow rules removed from the AD sync rules completely, leaving it to join and perform persistent flow rules only;
    2. Account (and optional mail alias) creation was performed entirely by a PowerShell MA, which used LDAP lookups on the target AD forest(s) to arrive at a unique value and insert what was effectively a “stub account” immediately (no initial password);
  2. Setting the initial password and notifying the manager in an email.
    1. An extension to the above was to set the initial password in a PowerShell workflow activity, and pass the value back to a WorkflowData variable to allow this to be included in an email notification.
    2. Once the password was set a “PasswordIsSet” flag on the account was set to TRUE which was tied to the EAF for userAccountControl in the AD sync rule to allow the AD account to be activated only once there was a password assigned.
      This gave us an alternative to the workflow parameter approach used with the ERE style sync rules.
  3. Setting an AD extension attribute value to the Base64 encoded value of the AD GUID.
    Performing this task is easy in a rules extension, but impossible with scoped sync rules given the available function set. However, this could be performed as either a secondary step in the “set password” workflow, or as a post-processing PowerShell task which searched the target FIM OU for accounts with a missing extensionAttributeXX value and set the value. Either way, this did the trick.

There were a number of other variations on the above ideas used at various times in the design, but the above three are the main ones that spring to mind. They are enough to make the point – that if you’re willing to work to the limitations of scoped sync rules by employing methods such as the above, then your FIM sync design ends up with no rules extensions – and no EREs either!
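The Base64-of-GUID step from scenario 3 is a one-liner outside of FIM, which is exactly why it suits a post-processing script. Here is a sketch of the encoding, shown in Python for illustration only (in the actual design this was a PowerShell task; the key detail is that AD’s objectGUID uses the little-endian field layout that .NET’s Guid.ToByteArray() also produces):

```python
import base64
import uuid

def guid_to_base64(guid_str):
    """Base64-encode an AD objectGUID over its little-endian byte layout,
    matching [System.Convert]::ToBase64String over the GUID's byte array."""
    return base64.b64encode(uuid.UUID(guid_str).bytes_le).decode("ascii")

print(guid_to_base64("00000000-0000-0000-0000-000000000000"))  # AAAAAAAAAAAAAAAAAAAAAA==
```

The PowerShell equivalent simply wraps the account’s GUID with [System.Convert]::ToBase64String and writes the result to the chosen extension attribute.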


No doubt there will be some times when you have requirements which will prevent you from using scoped declarative rules. As mentioned in Part 1, there are a couple of check-points you need to cover off before you should be confident of proceeding any further, and these I have attempted to cover. In my case I was able to design and (with the help of my able colleagues) implement a reasonably complex FIM sync solution based entirely on scoped sync rules.

In my last post on this topic I plan to reflect on the overall result and all those ‘why?’ questions. I’ll also share a utility I used to troubleshoot objects that hadn’t had the expected sync rules applied. With the ERE model you can see the sync rule has been physically attached to the target – but scoped sync rules have no such indicator, making troubleshooting much more difficult without the aid of a new tool. I’ll also share with you a couple of FIM sync rule bugs I uncovered but was happily able to work around while the problems are fixed by Microsoft in the fullness of time.

Posted in FIM (ForeFront Identity Manager) 2010 | Tagged , , , | Leave a comment

#FIM2010 R2 Scoped Sync Rules – Part 1 (The Vision)

There have been numerous attempts since the concept of ‘declarative sync rules’ was first introduced with FIM 2010 to eliminate the need for rules extensions altogether, but rarely have these been successful.  In all but the most trivial of scenarios we find ourselves resorting to writing .Net code, when we invariably run into the now well-documented limitations of this type of approach to custom sync rules.  On top of this, when using the original MPR-based rules, the extra sync overhead of the expected rule entry (ERE) processing drove most of us to distraction, especially in the build phase where sync rules are always evolving, not to mention when large numbers of sync objects were involved.

At TEC in San Diego in 2012, some years after the inception of the declarative model, David Lundell presented his FIM R2 Showdown — Classic vs. Declarative presentation.  Despite his protestations to the contrary, many present left his entertaining presentation firmly of the view that the traditional model won hands down.  The argument in favour of traditional went something like this:

  • Declarative will only work at best 8 times out of 10 (see the links below for the main scenarios where these fall short);
  • In cases where it does not work the options are custom rules extensions and/or custom workflow activities;
  • Even if there is only one case where declarative doesn’t cut it, we are left with business rules to maintain in more than one place;
  • Given that most would consider a single place to maintain sync rules to be more than highly desirable, why bother at all with declarative given that you can always build 100% of your sync rules with rules extensions?

The Microsoft FIM product group had invested a lot of energy in bringing the whole declarative concept to fruition, and are not about to give up any time soon.  There has been a steely resolve to make a success of this approach, due mainly to feedback from MIIS/ILM customers and prospects before FIM that the biggest weakness of the product was that you couldn’t actually provision anything without writing at least some .Net code, no matter how small.  To their credit they took this and other feedback like it on board, and responded with the concept of a ‘scoped sync rule’ alternative with FIM2010 R2.

Those of us who were not so jaded by our own forays into the declarative world as to ‘throw in the towel’ by this point took some interest in this development.  Of all of my own experiences with declarative rules, it was the ERE which frustrated me (and my customers) the most.  In one particular site, the slightest rule change always meant many hours (even days) of sync activity to re-baseline the sync service.  When this time exceeded available change windows, I couldn’t help but feel at least partially responsible for the administrators’ pain.  Given I had done my share of MCS FIM projects where the declarative model was actually mandated (a case where the sales pitch had often set unrealistic expectations with the customer), it was clear to me then that Microsoft wasn’t going to give up on the idea, so I might as well try to ‘get with the programme’.  Consequently on the next major MCS project I embarked on, I was determined to revisit David’s TEC presentation to see if it might be possible to finally achieve what had become something of a FIM ‘holy grail’ – 100% declarative sync.

I have previously read about others’ experiences in this, including the following posts:

However, in my mind at least, all of these had a common underlying sentiment … “nice try, but no cigar”.  What is more, none of these seemed to talk in any depth (if at all) about the ‘scoped’ alternative to the standard ERE-driven model.

Staring me in the face now was what initially appeared to be a typical FIM sync scenario – with some complexities only emerging well after the initial design was settled:

  • Approximately 10-20K user objects under management
  • Authoritative HR source (SAP), with extended ‘foundation’ object classes (position, department, cost centre, job class, etc.)
  • Provisioning and sync to Active Directory (2 legacy AD forests in a trust relationship, with a new forest to come online at some point in the future)
  • AD group membership provisioning based on foundation data references
  • A hybrid user mailbox provisioning requirement (users split between Office 365 and on premise 2010 Exchange)
  • Provisioning to a legacy in-house access management system (via a SharePoint 2007 list)
  • Sync with an externally hosted call management system (provisioning will eventually follow in a subsequent phase)
  • Office 365 license assignment
  • Notification workflows
  • Write-backs to HR (email, network ID)

With the voices of many nay-sayers ringing in my ears, I remained quietly confident I could pull this off, by taking the following line of thought:

  • So long as I didn’t need to disconnect (de-provision) any objects under sync, I could work with scoped SRs and avoid any use of EREs;
  • If I developed a consistent object (resource) model in the FIM service, modelled heavily on the inherent HR structures and relationships, I would be able to engineer the same consistency in the FIM Metaverse and each connector space;
  • By investing in each extensible connector design (I had 5 of these) I would ensure that I presented data in the same consistent structure, maximising the chances of ‘direct’ attribute flows both inbound (IAF) and outbound (EAF);
  • By taking any complexities known to be beyond the SR capabilities (due mostly to the limited function set) outside of the FIM sync process itself, handling them either within the connector import/export process, or in a pre/post sync ‘out of band’ process; and
  • Making heavy use of PowerShell (all 5 extensible management agents being instances of a PowerShell connector, as well as all pre/post sync processing).

In the next post I will cover how I went about building to the above principles, and some of the challenges I encountered along the way.  Without giving the game away entirely, all I will say at this point is that for every challenge there was always a work-around – the question was always going to be if any one of these would force me to write any .Net code.

Posted in FIM (ForeFront Identity Manager) 2010 | Tagged , , , | 1 Comment

A new angle on an old #FIM2010 problem

Anyone working with the FIM Sync engine, in its current or previous guises, for any length of time will be familiar with the age old dilemma – how best to ensure uniqueness constraints of a newly provisioned AD account.  Or a mailbox alias for that matter.

In a standard FIM Sync design, in particular one where there is an authoritative source of identity such as an HR system, there are standard considerations that must be observed, the basics of which are explained here.  What makes this particularly challenging where FIM is responsible for assigning unique values in AD (based on a supplied algorithm, e.g. surname+initial+number) is a number of significant constraints such as the following:

  1. Organisations continue to insist on user-friendly account names.
    In student or staff management systems there is generally already an enforced unique numeric ID that would be perfect for sAMAccountName, userPrincipalName, mailNickname, CN, or all four.  Yet people insist on employing age-old algorithms based on functions of surname, given name and initials, adding unwanted complexity and ongoing management overhead to your IdM solution.  Such values invariably change during the identity lifecycle at some point (i.e. are not immutable), making manual intervention at some stage practically unavoidable.
  2. FIM needs to have knowledge of all existing user accounts before it can provision a new one.
    It is rare that anyone would really ever want to include the full OU tree in the scope of the FIM ADDS MA.  However, that is effectively what you have to do so that you can implement your uniqueness algorithm in FIM logic.
  3. FIM may need to enforce uniqueness across multiple AD domains and forests.
    Particularly where a hybrid cloud/on-premise user synchronisation scenario exists with a single tenant, it is no longer satisfactory to enforce uniqueness in a single AD forest alone.
  4. Scoped declarative sync rules cannot be used to generate a unique value.
    These don’t support parameters, so you can’t write your clever workflow activity and pass a value in like you might have done in the days when EREs were the only declarative approach available.  In FIM2010 we’re still limited to a very small set of functions, and while calculating a random number might still be possible, implementing the type of complex rules most organisations insist on is effectively impossible today.
  5. ERE-style declarative sync rules cannot reliably be used to generate a unique value.
    If you insist on going down this path then yes, there is a way to use a custom workflow activity to pass in a sync rule parameter value.  But don’t expect this to work when FIM is under even slight duress (see #6 below), and certainly don’t expect the calculated value to be guaranteed unique – especially if you are dealing with latency in writing the value out to AD.
  6. When used with the FIM sync engine, the FIM service is a poor option for calculating a unique value in general.
    When processing even a moderate volume of concurrent requests invoking a FIM workflow activity designed to calculate a unique value, the FIM service will invariably cause requests to fail with the dreaded Denied error.  The uniqueness enforced in the RCDC for AccountName might work OK for new user records created in the FIM portal, but not so for new ones created either by the sync service or imported in bulk via the FIM API.  You may also think that bypassing the sync engine altogether to generate your unique account name might be an option – and if you do you are at the same point I got to – but if you do you will need to almost reinvent the very good, robust wheel that is the sync engine when it comes to making your value ‘stick’ in AD.
  7. Enforcing consistency between sAMAccountName and the userPrincipalName (UPN) prefix.
    It may not even be important to many people, but I always figured that it would at least be desirable that these two values were the same.  Of course they can’t be if you can’t limit your values to less than the 20-character limit of the sAMAccountName property, meaning that the other 44 characters available to you in the UPN will invariably go to waste.  I figure that the ADUC console encourages consistency, so why shouldn’t FIM?
  8. Enforcing consistency between UPN and email.
    In a lot of Office365 implementations that pre-dated the “alternate login attribute” concept, it was a requirement for SSO that these values be identical.  This is made additionally challenging when multiple UPN suffixes exist to choose from.
  9. Enforcing uniqueness beyond sAMAccountName and userPrincipalName.
    It is rare that you can get away with not considering some or all of the other standard AD attributes too – namely CN, displayName and mailNickname.  Even if you do get lucky and achieve agreement to use the student ID or employee ID as the login name, you’ll generally need to come up with a friendlier email alias.  A unique CN becomes important if you are required to locate your account in different OUs depending on identity state information (e.g. employeeStatus).  Then there’s displayName – FIM hates it when you can’t enforce uniqueness here – and I thoroughly agree with Brad Turner’s 2007 argument here.

I think it is fair to say that in over a decade of working with this toolset, nothing has emerged that stands out as a more consistent way of addressing all of the above concerns than the original MIIS developers’ guide examples from the CHMs that used to come with the product.  I have used an approach similar to this one many times with great success – only to find that customers do not want to have to edit your C# or VB.Net code years later (assuming they can put their hands on it) just to change the number of digits appended to the end of surname+initial from 2 to 3 because they never expected volumes to rise like they had.  As someone with a .Net developer background, like some of you reading this article, I would happily keep things this way if the customer was happy with the approach.  Invariably, they are not.

So – how do we go about designing a FIM solution that covers off points 1-9 above (as well as others I simply can’t think of right now) – and NOT write any .Net code, while still using the FIM sync engine for all AD sync and provisioning?  And no, without any 3rd party tool or CodePlex library either?

Well, yes, you’re partially right if you’re thinking PowerShell, and I alluded to it above in point #6.  But rather than running a PowerShell activity from within a FIM workflow, you can use your favourite PowerShell connector to do this instead.  And all with scoped declarative sync rules and not a rules extension in sight – in fact this is probably only the third time I’ve managed to create an entire sync solution without a rules extension.  The other two were with my company’s own XML-based ‘codeless’ extensions, so I guess this makes it the first in the purest sense.

The idea is simple enough – use a scoped SR with initial flow to provide the attributes that are the inputs to your uniqueness algorithms, but applied to a PowerShell MA/connector instead of the ADDS MA.  This gives you the freedom to do your uniqueness checks on the whole directory tree without bringing all OUs within the scope of the ADDS MA(s), which you will still need for all the standard attribute flows best handled that way.  You can tie the two together with an inbound flow rule from AD which can unambiguously join on an attribute value that was written to the AD account created by your PowerShell MA.  At this point you can also move your account from the default OU that the ‘stub’ account was originally placed in.
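By way of illustration only, the uniqueness logic such a PowerShell MA performs can be sketched as follows (Python here for brevity – the real thing was PowerShell doing LDAP lookups against the target forest(s); the plain set below stands in for those lookups, and the surname+initial+number algorithm is the one discussed above):

```python
def unique_account_name(surname, given_name, existing, sam_max=20, digits=2):
    """surname + initial + zero-padded number, checked against the names
    already in the directory (a set standing in for an LDAP lookup).
    The base is truncated so the result fits the sAMAccountName limit."""
    base = (surname + given_name[0]).lower()[: sam_max - digits]
    for n in range(1, 10 ** digits):
        candidate = f"{base}{n:0{digits}d}"
        if candidate not in existing:
            return candidate
    raise ValueError("numeric suffix range exhausted; increase 'digits'")

existing = {"smithj01", "smithj02"}
print(unique_account_name("Smith", "John", existing))  # smithj03
```

Note how the number of appended digits is just a parameter here – exactly the kind of change (2 to 3 digits) that customers found painful when it was buried in a compiled rules extension.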

So that’s the point of this post really.  I’m letting you know that by combining two relatively new tools (well, let’s face it, nothing FIM is ever bleeding edge, is it?) in (your favourite) PowerShell MA and FIM2010 R2 scoped declarative rules, you have all that you need to architect a maintainable FIM sync solution that ticks all the boxes.  Not only that, but you’ll also find your customer is happy there is no .Net code to maintain, and you will be happy in the knowledge that whoever comes along after you to extend the solution will thank you for making their job a whole lot easier.  Sure – you are making yourself less indispensable, but then again, with the future of MIM2015 and AADSync pointing to a complete absence of custom rules extensions, you’re also creating a more future-proof solution on which to base repeatable business.

Posted in FIM (ForeFront Identity Manager) 2010 | Tagged , , , | Leave a comment

Key #FIM2010 Principles for the New Year and the #MSMIM2015 Timeframe

It’s been an eventful couple of months leading up to Christmas for me, starting with the MVP conference in Redmond and followed closely by my company UNIFY’s 10th anniversary, at which I was taken by surprise to be honoured as the first UNIFY 10-year employee. Although I can’t remember a word of my acceptance speech, I won’t forget the evening in a hurry, and how proud I felt to be part of an exceptional selection of IAM professionals carving a niche in the Aus/NZ identity and access market, not only in FIM/MIM, but also in other complementary IAM technologies such as Azure, Ping and Optimal. It was a nice touch by one of my Novell-inspired colleagues who presented a different take on our theFIMTeam.com brand.

Since then I have been heavily involved in a couple of large-scale FIM deployments and this will continue in the new year with a major project to use FIM2010 to replace an ailing access provisioning system. I will be drawing on all 10 years of my ILM/FIM experience with this one as the project seems to take on something new by the day, but I’m looking forward to sharing the challenge with a more than capable team assembled for the task. This project really brings together so many key concepts as to how to approach identity and access life-cycle provisioning that I thought I’d share the main ones here, as they will remain as relevant as ever while we roll into a new year and the pending MIM2015 timeframe.

  1. Achieving stakeholder accord across multiple platforms and programmes.
    Especially in an enterprise environment, believing that you can plow ahead and eventually win the naysayers over is foolhardy and disrespectful.  Everyone is entitled to their opinion, and engaging with them all early is vital to share ideas and draw on experiences to avoid pitfalls of the past.  Knowing where to respectfully hold your ground is just as important as acknowledging and embracing a superior alternative approach.
  2. Understanding the target environment and culture
    Sure, there are systems to integrate, but always keep in mind the people that have to deal with them day by day, and understand the impact of the changes you will invariably introduce.  While you may see yourself as the harbinger of change, others may measure the success of your project by the exact opposite!
  3. Maintaining clarity of vision
    Don’t take on any more than you can handle in the timeframe allowed.  There is always more to do, and pressure to try to accommodate everyone’s needs and ideas at once.  Identify what is paramount for an initial successful deployment, and build your strategy from that.  Don’t eliminate anything, but clearly lay out a roadmap and identify a timeframe for each targeted requirement.
  4. Integrating processes not just data
    Think about on-boarding, moves, and off-boarding.  Extend this thinking to edge-case scenarios such as rehires and elevated duties.  Think about the events that drive changes, and work out how you can best leverage them: not just the events happening now, but also those coming in the near future.
  5. Provisioning relationships not just identities
    Especially when working with FIM, or MIM later this year, resist pressure to avoid surfacing the key relationships between data entities that you will need to drive policy.  Rather than caving in to working with ‘flat’ data structures, where every piece of information is a string attribute of a user, point out the benefits of modelling a simplified, uniform data structure in FIM.  Demonstrate that by maintaining and honouring these relationships when synchronising entities between multiple systems, you not only ensure referential integrity, minimise sync times, and avoid errors, but also provide the mechanisms you need to add value in FIM in terms of policy.
  6. Responding to changes in a timely manner
    I will come back to this below …
  7. Honouring multiple authoritative sources
    Rarely is one platform or system 100% authoritative for all entities and attributes in a synchronisation/replication model.  Acknowledge this up front by identifying the processes in connected systems, rather than just the data, that might come into conflict when automated sync comes into play.  Build flexibility into your model so it can adapt to changes as they invariably evolve, along with collective understanding.
  8. Planning for the future
    Further to point #3, we are doing our job well if we are building a strong foundation for future identity and access management initiatives and requirements.  Don’t lock your customer into something that will not allow them to adapt as their business evolves any more than is absolutely necessary.

I know there are even more, but the above stand out to me as critical to success as I face the busy months ahead.  I have posted on some of these before, and find myself coming back to them over and over.

I am presenting the January 2015 FIMTeam UG session in a couple of weeks (yes, even though many of you are still on holidays).  In this session I will be addressing point #6 above.  Those of you who know me will understand that this is a passion of mine, and for good reason.  I really need you all to understand how FIM sync can be “uplifted” in a way you may never have thought possible, in order to deliver not only to SLAs, but also to people’s true expectations of a modern identity life-cycle management solution.  Looking forward to your company – but if you miss it you will be able to view the recording at your leisure from the above link.

Happy 2015 everyone – may it be the best ever.

Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010 | Tagged , , , | Leave a comment

#FIM2010 Run on Policy Update saves the day

As this old 2009 post on the Bobby and Nima blog attests, there is often value in turning on the Run on Policy Update (ROPU) setting on a FIM2010 workflow – even if it’s only temporarily.

My use case is a workflow which adds a sync rule to a target user object to write an email address back to an HR system … in this case that actually means creating a contact record with the new email.  During testing I had found that bulk emails initiated from an HR platform to live users when their email was set had the potential to be career limiting – and I needed to introduce an override concept.  This I did by implementing a sync rule parameter and testing for the presence of a value in the supplied parameter in the EAF (export attribute flow) for email.  If a value was set I would use that instead of the email bound to my user object.  Simple enough idea, and it did the trick nicely.  That is, until the default email value for existing users needed changing …
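For readers unfamiliar with sync rule parameters, the override test described above can be sketched as a custom expression on the sync rule’s export attribute flow for email, using FIM’s declarative function syntax.  This is an illustration only – the parameter name `emailOverride` and the source attribute `mail` are assumptions, not the actual names from my solution:

```
IIF(Eq(emailOverride,""),mail,emailOverride)
```

When the workflow supplies an empty parameter value, the user’s own mail attribute flows out as normal; any non-empty value takes its place – which is what kept those bulk test emails away from live users.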

Changing the parameter on the workflow is the obvious first step – but this of course didn’t affect the existing EREs (Expected Rule Entries).  Enter ROPU.  Simply disabling and re-enabling my MPR re-triggered my workflow (a Set Transition In policy) for every user in scope of my ResourceFinalSet.  Sure, this was a lot of activity in a short period of time, but it did the trick.
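The disable/re-enable cycle can also be scripted rather than clicked through the portal.  The following is a minimal sketch only, assuming the FIMAutomation PowerShell snap-in is available on the FIM Service box and that the MPR display name (a placeholder here) is unique – it is not the exact method I used:

```powershell
Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue

# Locate the MPR by display name (placeholder name - adjust to suit)
$mpr = Export-FIMConfig -OnlyBaseResources `
    -CustomConfig "/ManagementPolicyRule[DisplayName='HR email writeback MPR']"
$id = $mpr.ResourceManagementObject.ObjectIdentifier

function Set-MprDisabled([string]$objectId, [string]$value) {
    # Build a single attribute replacement for the Disabled flag
    $change = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportChange
    $change.Operation      = 1          # 1 = Replace
    $change.AttributeName  = "Disabled"
    $change.AttributeValue = $value
    $change.FullyResolved  = 1
    $change.Locale         = "Invariant"

    $import = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportObject
    $import.ObjectType             = "ManagementPolicyRule"
    $import.TargetObjectIdentifier = $objectId
    $import.SourceObjectIdentifier = $objectId
    $import.State                  = 1  # 1 = Put (update an existing object)
    $import.Changes                = @($change)
    $import | Import-FIMConfig
}

Set-MprDisabled $id "True"    # disable the MPR
Set-MprDisabled $id "False"   # re-enable: Set Transition In re-fires for the set
```

As noted above, expect a burst of requests when the transition re-fires for every member of the set – plan the timing accordingly.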

Note that I find it is ALWAYS good practice to remove any existing sync rule (SR) as the first step of my workflow, before adding any new SRs … otherwise you can end up with the same SR added many times over.

Posted in FIM (ForeFront Identity Manager) 2010 | Leave a comment