SCCM (current branch) offline upgrade adventures (1511 to 1606)

Those who have read my blog for some time know that our system is a high-security, air-gapped network. This can sometimes make administration of the system…. interesting, to say the least. Most frustrating is when applications take Internet connectivity for granted (which, unfortunately, is becoming more and more common).

Our latest adventure is upgrading System Center Configuration Manager from 2007 R2 (!) to SCCM (current branch). We are in the middle of an install into our test system, where we perform all tasks as if the network has no external connectivity, just like the real thing. Now, I have a total love/hate relationship with SCCM… in that I love it when it works, but it’s a total POS when it doesn’t. And right now… I HATE YOU MICROSOFT.

Now, prior to the release of version 1606 as a standalone installer in October 2016, the only way to install SCCM 1606 was to first install 1511 and perform the upgrade within the SCCM console.

We configured the Service Connection Point to operate in offline mode, in order to simulate the real-world environment. Next, one has to wait SEVEN DAYS before any telemetry* is made available. Once you have waited the requisite time, one must then use the Service Connection Tool to export the telemetry and import the updates. Now, this is where the fun really starts….

Fail one: the online documentation for the Service Connection Tool is only for version 1606; there doesn’t appear to be any way to view historical documentation. This is a problem because one of the features described doesn’t even exist in the 1511 version of the tool: the ability to set a proxy connection. WHAT?!? We got stuck trying to connect and, after an hour spent wondering why the command wouldn’t take, found that the only way to get the data was to send one of the engineers home to pull it from their personal network. So now I’m asking: is there actually a company out there that uses SCCM and DOESN’T have a proxy? Yeah, I couldn’t think of one either. Supposedly, this was fixed in 1606… we’ll see….
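For reference, the basic offline flow with the tool looks like this. The switch names below come from Microsoft’s documentation for the 1606 version of ServiceConnectionTool.exe, so treat them as assumptions for the 1511 tool; the paths are placeholders:

    # Switch names per the 1606 docs (assumed); paths are placeholders
    # 1. On the offline service connection point: package the usage data
    .\ServiceConnectionTool.exe -prepare -usagedatadest "D:\Transfer\UsageData.cab"
    # 2. On an Internet-connected machine: upload usage data, download update packs
    .\ServiceConnectionTool.exe -connect -usagedatasrc "D:\Transfer\UsageData.cab" -updatepackdest "D:\Transfer\UpdatePacks"
    # 3. Back on the service connection point: import the downloaded updates
    .\ServiceConnectionTool.exe -import -updatepacksrc "D:\Transfer\UpdatePacks"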

Fail two: once we actually had the data, we used the Service Connection Tool to upload it to the site server, which it did without complaint. The only problem was, the updates never appeared in the Updates and Servicing tab of the console. This was much harder to resolve. Looking in the dmpdownloader.log file, we noticed this:

[screenshot of dmpdownloader.log]

Hmmm…. interesting. Drilling into the directory, both the ConfigMgr.Update.Manifest.cab file and the other CABs were there, just ONE LEVEL DEEPER than where SCCM expected them to be. C’MON GUYS… did you actually test your tools before releasing them? Anyway, moving the CABs up one level to where SCCM expected them fixed the issue.
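If you hit the same thing, a couple of lines of PowerShell will do the move for you. The staging path below is purely hypothetical; substitute whatever directory your dmpdownloader.log is actually looking in:

    # Hypothetical path: use the directory your dmpdownloader.log complains about
    $expected = 'E:\SCCM\EasySetupPayload'
    Get-ChildItem -Path $expected -Recurse -Filter *.cab |
        Move-Item -Destination $expected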

Hopefully, this will help one of the other 0.5% of SCCM users out there who aren’t actually connected to the Internet (and whom Microsoft doesn’t give a crap about)…. you’re welcome.

* Fail three: we’re also not allowed to pull data down from the air-gapped system and move it to the Internet… ever. This presented a real problem with SCCM, since it now requires telemetry to be sent back in order to receive updates. Microsoft has been less than helpful with this issue, assuring us that the data (some of which is processed via a one-way hash) contains no sensitive information… which I find highly amusing, since there is no way to independently verify the hashed data. We’re supposed to just trust them…. riiiight….. We worked around this by creating a “dummy” site, configured in a similar fashion, that provides fake telemetry data and serves as a surrogate for the real site. Time will tell if this is genius or blows up in our face!


Realtime MVC ModelState errors during debugging

While working on my most recent ASP.NET MVC project, I was having a tough time figuring out why my ModelState was invalid. It turned out to be because I had set the error messages to “Required” or “Invalid” for ALL of my model properties (to support the way the view presents the error messages). In retrospect, this probably wasn’t the smartest choice. Oh well, now I’m stuck figuring out another way to determine which properties are causing model validation to fail.

With Visual Studio 2015, I discovered a marvelous way to get the names of the properties with invalid values… by using a watch! Watches now support lambda expressions (as long as there are no external dependencies, such as LINQ to SQL), so I thought… this would be a good time to test that functionality.

Since I was interested in the actual name of the property that was causing
ModelState.IsValid to return false, I used the following watch:

ModelState.Where(x => x.Value.Errors.Count > 0).Select(x => x.Key).ToList()

For those who don’t usually use watches, you may not know that a watch can be created directly in the watch window by just typing it in! That is what you will need to do here, since I’m guessing that this value is most likely not in your code…


If you get a message under Value saying “This expression causes side effects and will not be evaluated,” just click the refresh button to the right of the message to force-evaluate the watch. Once you do so, you’ll be able to expand the watch object to see your offenders!


Pretty neat!

Listing installed applications on Server Core

If you have the SCCM client installed, a WMI class is created that provides an inventory of applications installed on a particular machine (and can be queried using PowerShell):

Get-WmiObject -Class Win32Reg_AddRemovePrograms

If you don’t have SCCM installed, the most reliable way to get this information is directly from the registry at:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall

and

HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall

Though I don’t personally bother with using them, there are some places around the Interweb where others have rolled the registry walk into a PowerShell script.
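If you’d rather roll your own, a minimal sketch looks something like this (it checks both the native and 32-bit-on-64-bit hives and skips entries without a display name):

    # Walk both Uninstall hives and list installed applications
    $paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
             'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
    Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
        Where-Object { $_.DisplayName } |
        Select-Object DisplayName, DisplayVersion, Publisher |
        Sort-Object DisplayName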

I would NOT recommend using Win32_Product as it forces re-registration of installed applications, which is slow and may lead to undesirable second-order effects.

It is unfortunate that M$ has yet to develop a simple command line (or PowerShell) option for retrieving information as basic as this…

allowDefinition=’MachineToApplication’ error after enabling MVC compile-time view checking

Two posts in one day?  Yeesh… I’m just trying to make up for my extended hiatus from blogging!

Actually, this post supplements the previous one, but I think it is the more important of the two, since I don’t believe a complete solution to this problem exists anywhere else on the interweb (except for my answer on stackoverflow, which has ZERO upvotes… not that I’m counting LOL).

Here’s our scenario:

  1. The developer wants compile-time checking on views, so they set MvcBuildViews=true.
  2. The application builds fine, UNTIL they publish the project.
  3. Subsequent attempts to build the project result in a compile-time error: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

So what causes this issue? When the project is published, the compiler by default uses <project-dir>\obj\ to place copies of the source files that it will work with. Unfortunately, these files are not automatically deleted when publishing is complete. The next time the developer compiles the project with MvcBuildViews=true, the build errors out, because the ASP.NET compiler includes the obj\ folder during compilation (it sits underneath the <project-dir> folder).

So how do we fix this? Well, you have four options:

  1. Set MvcBuildViews=false. I don’t really consider this a solution, so let’s move on.
  2. Delete the files in <project-dir>\obj\. Works, but can be a hassle since it has to be done after every publish.
  3. Change the path that publishing uses as an intermediate directory, via the <BaseIntermediateOutputPath> property in your project file.
  4. Add a new section in your project config file that deletes the offending files for you on build (reference Microsoft Connect). I’ve even made it easy for you, just copy and paste:
    <PropertyGroup>
      <_EnableCleanOnBuildForMvcViews Condition=" '$(_EnableCleanOnBuildForMvcViews)'=='' ">true</_EnableCleanOnBuildForMvcViews>
    </PropertyGroup>
    <Target Name="CleanupForBuildMvcViews" Condition=" '$(_EnableCleanOnBuildForMvcViews)'=='true' and '$(MVCBuildViews)'=='true' " BeforeTargets="MvcBuildViews">
      <ItemGroup>
        <_TempWebConfigToDelete Include="$(BaseIntermediateOutputPath)**\Package\**\*" />
        <_TempWebConfigToDelete Include="$(BaseIntermediateOutputPath)**\TransformWebConfig\**\*" />
        <_TempWebConfigToDelete Include="$(BaseIntermediateOutputPath)**\CSAutoParameterize\**\*" />
        <_TempWebConfigToDelete Include="$(BaseIntermediateOutputPath)**\TempPE\**\*" />
      </ItemGroup>
      <Delete Files="@(_TempWebConfigToDelete)" />
    </Target>

My recommendation would be to use either option 3 or 4.

ASP.NET MVC compile-time view checks in Visual Studio 2012

Anyone who has worked with MVC has suffered through runtime view errors.  This is because views don’t get compiled until they are rendered.  In order to check for successful compilation, you can:

  1. Manually visit every view in the browser (easy but tedious)
  2. Create automated UI tests (harder)
  3. Turn on compile-time view checks (just easy)!

Since I like just easy, I’m going to show you how to turn on compile-time view checks:

  1. Unload the project (right-click on the project in Solution Explorer and select Unload Project)
  2. Right-click the project again and select Edit *.csproj
  3. In the PropertyGroup section, add: <MvcBuildViews>true</MvcBuildViews>
  4. Save the csproj file and reload the project
  5. Eat a banana (optional)

Now, when running the build, Visual Studio will compile all of your views as well and identify any errors.

IIS 7/.NET 4 System.DirectoryServices: The (empty) search filter is invalid

This is a silly error, but it has caught me a couple of times.  Surprisingly, there doesn’t seem to be a blog anywhere that talks about this specific issue.

Situation: you have an ASP.NET 4+ application running on IIS 7.  You navigate to the page and get a server error:


Specifically, “The (&(objectCategory=user)(objectClass=user)(|(userPrincipalName=)(distinguishedName=)(name=))) search filter is invalid.”  Note that if you don’t have the pdb deployed, your source error will not show the actual error line, but rather “An unhandled exception was generated during the execution of the current web request.  Information regarding the origin and location of the exception can be identified using the exception stack trace below.”

This can be particularly vexing if the application works on your development machine, but not in production.

Cause: The LDAP lookup is failing because your directory requires authentication, and you’re running an anonymous session with a local computer account.
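You can reproduce the broken filter outside of IIS. Here’s a quick sketch (run from PowerShell on a domain-joined machine) showing how an empty identity name yields exactly that error:

    # An empty name (what you effectively get in an anonymous session) breaks the filter
    Add-Type -AssemblyName System.DirectoryServices
    $name = ''
    $searcher = New-Object System.DirectoryServices.DirectorySearcher
    $searcher.Filter = "(&(objectCategory=user)(objectClass=user)(|(userPrincipalName=$name)(distinguishedName=$name)(name=$name)))"
    $searcher.FindOne()   # throws: the search filter is invalid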

Fix: In IIS, turn off Anonymous Authentication and turn on Windows Authentication instead.

What the hell happened to StackExchange?

Earlier this morning, I was doing some work with XSLT and had a question on version support in .NET. I found the answer I needed on StackOverflow. What bothered me was the fact that the question was closed as being off-topic.

So I went onto meta and asked why this was. The short answer was that just because a question is popular doesn’t necessarily mean that it is on topic. I can’t fault that logic.  But this is where I have an issue: I think that the StackExchange moderators have become so rigid in their rule enforcement that we have lost sight of the forest for the trees.

In the response to my meta post, I was told: “The reason StackExchange sites are as useful to their users as they are is the rigidity with which we enforce these rules. If we simply allow the community to decide, we would end up with a site full of lol-cats and other popular content. The line has to be drawn somewhere.”  So why are we drawing the line where we do now?  I’m totally for weeding out lol-cats, but do you really think that this question is just as bad?

It just feels like dogma at this point. Moderators have become site police, and the slightest deviation is dealt with harshly. Having used StackExchange since very early in its life, I guess I’m just used to it by now. But how do new users feel, when there is a critic standing by for every question they write? It used to be a place where people could openly ask questions and things were more relaxed. Now I’m not so sure.

I don’t know what the answer is. As a guy with only a couple thousand rep on Server Fault, I guess my opinion doesn’t really matter anyway.

ASP.NET 4.5 MVC Bundling and Minification troubleshooting

ASP.NET 4.5 includes the System.Web.Optimization namespace (it can also be added via NuGet).  This library provides functionality to bundle and minify (B/M) your scripts and stylesheets.

Running some tests today, I noticed that my CSS image links were breaking whenever I enabled the B/M optimizations. Researching this, I realized I was using relative paths for the images in my CSS files. When B/M occurs, the bundle’s virtual path is used in place of the physical path of the files, and the two are usually different. In my case, the physical path was ~/public/css and the B/M virtual path was ~/bundle/css. Since my images are located in ~/public/images, the relative paths were no longer resolving.

Thinking this would be an easy fix, I changed the B/M virtual path to mirror the physical path ~/public/css. Unfortunately, this broke my CSS entirely. Checking the server response, I saw that I was getting a 403.14 error.

Turns out the MVC router blocks calls for B/M files that resolve to actual paths in the project. This is expected behavior, since the router is always called first and is simply doing its job (duh)!

The easiest fix for this is to simply make the virtual path one level deeper than the physical path (in my case, changing it to ~/public/css/bundle). The CSS file will then properly resolve all the relative image paths, regardless of whether or not B/M is on.

Javascript Intellisense in Visual Studio 2012

After years of use, I tend to accumulate a lot of cruft on my work computer hard drives, so whenever I replace a machine or hard drive I like to start fresh and only pull down old files from the backup server as I need them.  Last week, I replaced the hard drive on my laptop, so I had to redo all of the settings on Visual Studio.  As I had forgotten how to do this, I figured I should write it down to help me next time… perhaps it will help you too?

When developing Visual Studio MVC projects, if you’re like me, you like to move directories around to make more sense of the layout (instead of the default structure that M$ gives you).  An example of this is creating a public directory to hold your publicly available files, like stylesheets and scripts.  You’ll also notice that when you move your Javascript libraries, Intellisense breaks.  This happens whenever the _references.js file is moved from its expected location*.

To fix, simply go to Tools | Options | Text Editor | JavaScript | IntelliSense References.  Switch to the Implicit (Web) Reference Group and add a new reference that resolves to the new location of the _references.js file:

[screenshot of the IntelliSense References options page]

Note that if you want to use a path relative to your project, it has to be entered into the text box at the bottom, in the form of “~/whatever_dirs_below_project_root”.

* If you want Intellisense support for your Javascript libraries, you need to be sure that the library name is included in the _references.js file.  This is not all, however.  VS also needs a vsdoc.js file that provides the particular data needed by Visual Studio Intellisense (e.g. jquery-1.10.0-vsdoc.js supports jquery-1.10.0.js).  Note that you should NOT include the vsdoc file name in the _references.js file, just the main file.  Intellisense vsdoc files for jquery are available on the ASP.NET CDN at http://www.asp.net/ajaxlibrary/cdn.ashx.

Don’t buy stuff from the Microsoft Store… ever

I consider myself a pretty reasonable guy.  I try to be polite to everyone and slow to get angry, and it is definitely out of character for me to write about a bad experience.  But the jokers at Microsoft’s online store made me do it…

I just wanted to give you some advice that will save you heartache later on: don’t buy stuff from the Microsoft Store… ever.

Back on May 23, I ordered a Microsoft Surface Pro from the online store for my son, who wanted it for college classes this summer.  The website didn’t give any indication it was out of stock, back-ordered, etc., so I thought this would be the easiest way to buy one.  Fine, I ordered it and paid for next-day shipping.  Now understand, there is a Microsoft kiosk at a mall just 15 miles from my house, so I did this thinking it would save me time and a trip.  Boy, was I wrong on both counts.

Fast forward to May 28th.  My son was starting class the following day and still no Surface.  So, I called store support and talked to Gail, who told me that the order was still pending.  Her explanation was that there was a problem with my credit card?!?  Now, I pay off my credit cards every month… this is obviously not the problem. Whatever. I asked her to just cancel my order and I would go to the kiosk to buy one.  After a few minutes of convincing her that I really wanted to do this, she said the order was canceled.  I then went down to the kiosk that night with my wife and picked up a Surface (with the same credit card I used for the online order, no less).

Problem solved… or so I thought.

I get an email today… TODAY, June 12th, telling me:

Thank you for ordering from Microsoft. At your request, we attempted to cancel the following products from your order. Unfortunately, we were unable to stop shipment and you will be charged for these items.

Really? TWO WEEKS after I canceled my order, you can’t stop a shipment from happening?  So now I have to spend even more time, being a jerk to people that are just trying to do their job.

Wonder why people give Microsoft a bad rap?  I’d say they earn it all by themselves.

T

P.S.  Sorry to make you endure this rant on a technical blog.  I just needed somewhere to vent.  I’m off to dispute the charge.  Thanks for reading.

Breakpoints not hitting Visual Studio Unit Test projects

I’ve switched over to doing some software development for the past few months and haven’t been too active on this blog.  Seems that there are plenty of people WAY smarter than me when it comes to coding, so I don’t usually have too much to say about the stuff (other than that I’m good at faking it).  Anyway, I was working on some unit tests for a particular class in my current project, and for some reason breakpoints were never hitting in my tests.  The solution would build without error, but the breakpoints always said ‘symbols not loaded’.

This was really driving me crazy, and since I thought I had screwed up the build configuration, I ended up tearing out the test project and putting it back in, with no success.  Then, looking at Test Explorer, I noticed that the unit tests were failing.  Looking at the error message, it turned out that the test method with the [ClassInitialize] attribute had an incorrect signature (specifically, it needed to be public static void and have one parameter of the TestContext type).

So, long story short, a bad setup in your unit tests won’t necessarily prevent the solution from building, but it will almost certainly prevent your tests from running.  And Test Explorer will more than happily tell you that you are stupid.

Delay your Outlook Outbox

Not my usual fare for this blog, but it may be useful for someone else:

I had a user come in today and say she accidentally did a “Reply All” on an email and wanted to recall the message.  After showing her how to do it, I remembered that me and the guys had set a one minute delay on our outboxes to help save us from these awww, crap! moments.  It has saved our skins on a few occasions.  Problem was, I forgot how we did it!

So, here it is.  This is cited directly from Microsoft’s Outlook support site:

Delay the delivery of all messages

  1. Click the File tab.
  2. Click Manage Rules and Alerts.
  3. Click New Rule.
  4. In the Step 1: Select a template box, under Start from a Blank Rule, click Apply rule on messages I send, and then click Next.
  5. In the Step 1: Select condition(s) list, select the check boxes for any options that you want, and then click Next.

If you do not select any check boxes, a confirmation dialog box appears. If you click Yes, the rule that you are creating is applied to all messages that you send.

  6. In the Step 1: Select action(s) list, select the defer delivery by a number of minutes check box.
  7. In the Step 2: Edit the rule description (click an underlined value) box, click the underlined phrase a number of and enter the number of minutes for which you want the messages to be held before sending.

Delivery can be delayed up to 120 minutes.

  8. Click OK, and then click Next.
  9. Select the check boxes for any exceptions that you want.
  10. Click Next.
  11. In the Step 1: Specify a name for this rule box, type a name for the rule.
  12. Select the Turn on this rule check box.
  13. Click Finish.

After you click Send, each message remains in the Outbox folder for the time that you specified.

We find that a one-minute delay is just long enough to catch a bad email, yet short enough that it doesn’t annoy us.  To stop an email from being sent, simply open it or delete it from your Outbox.  I have an exception set on mine to immediately send high-priority messages (for when I’m just writing silly love notes to my wife, etc.).

KB2597166 – Microsoft Excel 2010 Security Update FAIL

** UPDATE-3 28 August 2012 ** Our internal tests show KB2598378 does in fact fix the issue.  Be sure to use the correct version (32 or 64 bit) depending on your Office version (NOT Windows).

** UPDATE-2 23 July 2012 ** Microsoft says this issue has been fixed in KB2598378.  We’re testing the fix at our shop now; let’s hope the second time’s the charm…

** UPDATE 17 June 2012 ** It appears that Microsoft has released a hotfix that resolves the issue (KB2598144).  We have NOT tested this hotfix, so I can’t say whether it works or not.  If anyone out there has, please let me know your results…

MS12-030 (KB2597166) was released on 8 May 2012.  This magical piece of crap will cause your Excel 2010 application to BREAK when users try to sort large data sets.  Indications include:

  • Large Operation warning box: “The operations you are about to perform affects a large number of cells and may take a significant amount of time to complete.”
  • Error: “Excel cannot complete this task with available resources.  Choose less data or close other applications.”

We were able to recreate the issue by selecting all cells and attempting a sort operation.

Word on the street is that Microsoft is aware of the issue but DOES NOT have a fix.  Their “workaround” is to not select the entire sheet prior to sorting.  My advice: DON’T INSTALL THE UPDATE (unless you like a bunch of angry users chasing you with pitchforks)…

If you happened to have already installed this patch, you can remove it using the following command line (helpful if you use automation):

msiexec /package {90140000-0011-0000-0000-0000000FF1CE} /uninstall {B76D8C6D-1F13-42A7-9931-D7504CB89D6D} /qn
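Speaking of automation: if you need to push the removal out to multiple machines, a quick sketch using PowerShell remoting is below. The computer names are placeholders, and it assumes WinRM is enabled in your environment:

    # Hypothetical sketch: remove KB2597166 from remote machines via WinRM
    $computers = 'PC01', 'PC02'   # placeholders
    Invoke-Command -ComputerName $computers -ScriptBlock {
        Start-Process msiexec.exe -Wait -ArgumentList '/package {90140000-0011-0000-0000-0000000FF1CE} /uninstall {B76D8C6D-1F13-42A7-9931-D7504CB89D6D} /qn'
    }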

Referenced assembly could not be resolved because it has a dependency which is not in the currently targeted framework

If you are using third-party development tools (like Ninject) with your .NET 4.0 project, you may get the following warning at compile time:

Warning: The referenced assembly "<Assembly>" could not be resolved because it has a dependency on "System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" which is not in the currently targeted framework ".NETFramework,Version=v4.0,Profile=Client". Please remove references to assemblies not in the targeted framework or consider retargeting your project.

This warning is a result of .NET 4.0 projects targeting (by default) the .NET Framework 4 Client Profile.  This profile is a subset of the full profile and does not include the entire .NET class library.  To fix, go into Project Properties | Application, and change the target framework to .NET Framework 4.  Visual Studio will prompt you that your solution must be closed and reloaded.  Once you do this, the project should compile as expected.

SCCM 2007 Error 2302: SMS Distribution Manager failed to process package

You may encounter this error when trying to update a distribution point.  You may also get error 2348 (failed to decompress).  This can be due to binary differential replication trying to send a corrupted package.

To fix this issue, disable binary differential replication, update the distribution points, and wait for them to replicate.  This causes the ENTIRE package to be redistributed (not just the deltas).  You can then safely turn binary differential replication back on.

Office 2010 GPO settings cause the Options menu to gray out

One of the networks that I manage does not have connectivity to the Internet. It also has Microsoft Office 2010 installed on all of the clients. As such, we had set up GPOs to turn off any functionality that requires Internet access. We discovered that doing this caused the File tab | Options menu to dim as well.  After a process of elimination, I found that the culprit is the following policy:

User Configuration | Administrative Templates | Microsoft Office 2010 | Disable Items in User Interface | Disable commands under File tab | Help

Specifically, if this policy is enabled and Office Center is checked, then the Options menu is dimmed as well.  I posted this just in case someone else out there encounters this “feature.”

T

Configuring an authoritative time source for your Windows domain

7 Mar 2013 Update: If you read this article, you’ll note that there is no mention of Group Policy.  Some of you have asked why that is (especially since I’m such a big fan of management via GPOs).  It’s because the time policies are only useful if you’re doing some type of non-standard configuration.  The only real configuration you need for a ‘typical’ time sync setup is for the DC with the PDC emulator role.  As such, there really is no reason to set Group Policy (well, you could for the PDC DC, but I think it’s kinda ridiculous to set up a complex GP for only one machine).  Besides, performing the settings in the registry ensures that the settings will persist, even if GPs fail to apply for some reason.  – Ed.

This is an article that I’ve been meaning to write for some time now, but always forgot. Well, this morning, we had a problem with one of our time servers which reminded me about this topic. I will show you how to properly configure time services for your Windows domain. While all of this information is already out on the Internet, it is located in many disparate sources; so this is my effort to give you a one-stop shop by providing comments where I thought the Microsoft article was lacking…

First, I have to go over a few caveats:

  • A typical domain uses the Windows Time Service (w32tm) to manage time synchronization within the domain. This service works fine for Kerberos (which is the primary reason we like to keep clients in sync). It is interesting to note that Windows doesn’t really care if the time on the domain is CORRECT, just IN SYNC (within a very generous tolerance). Although a properly configured time service will be very accurate, the precision of the time on clients can vary. So, you don’t want to use this method to sync the clock on stock trading workstations, for example. They need something more sophisticated, like dedicated NTP clients that sync to the time server directly.
  • Second, I am assuming that you want to sync to an external source (i.e. an Internet NTP server or your own hardware time server) so that your time reflects real-world time.

Step 1 – Configure your domain’s authoritative time server. For a domain, this will be the Domain Controller that holds the PDC emulator role. To find out which DC has this role, run netdom query fsmo at the command prompt.

The DC with ‘PDC’ is the one we’re interested in. We will now configure this DC as an NTP CLIENT. (Comment: There is some confusion about the meanings of NTP server versus client. In this case, you are configuring your Domain Controller (which happens to be a server) to use NTP as a client; we are consuming this service from an NTP SERVER. So, try not to get the terms mixed up! Just follow my directions, and you’ll be fine. And if you’d rather script it, there’s a PowerShell consolidation of these steps after the list.) Reference: Microsoft KB 816042:

  1. Change the server type to NTP. To do this, follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\Type
    3. In the right pane, right-click Type, and then click Modify.
    4. In Edit Value, type NTP in the Value data box, and then click OK.
  2. Set AnnounceFlags to 0xA (EDIT: There is some confusion about this setting.  Technet tells you to set this to 0x5; I recommend 0xA instead, see below). To do this, follow these steps:
    1. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\AnnounceFlags
    2. In the right pane, right-click AnnounceFlags, and then click Modify.
    3. In Edit DWORD Value, type A in the Value data box, and then click OK.
      Notes
      – If an authoritative time server that is configured to use an AnnounceFlag value of 0x5 does not synchronize with an upstream time server, a client server may not correctly synchronize with the authoritative time server when the time synchronization between the authoritative time server and the upstream time server resumes. Therefore, if you have a poor network connection or other concerns that may cause time synchronization failure of the authoritative server to an upstream server, set the AnnounceFlag value to 0xA instead of to 0x5.
      – If an authoritative time server that is configured to use an AnnounceFlag value of 0x5 synchronizes with an upstream time server at a fixed interval that is specified in SpecialPollInterval, a client server may not correctly synchronize with the authoritative time server after the authoritative time server restarts. Therefore, if you configure your authoritative time server to synchronize with an upstream NTP server at a fixed interval that is specified in SpecialPollInterval, set the AnnounceFlag value to 0xA instead of 0x5. (Comment: AnnounceFlag settings are described in http://technet.microsoft.com/en-us/library/cc773263(WS.10).aspx#w2k3tr_times_tools_uhlp. Multiple flags are set by adding the hex values together. Based on the configuration example, I recommend setting this to 0xA. This is because you want to ensure that your domain always stays in sync, even if the NTP source(s) go offline.  By using 0xA, which is a combination of 0x08 and 0x02, you ensure that even if NTP is unavailable, the server still self-elects as the authoritative time source for the forest and will keep the domain in sync with itself.)
  3. Enable NTPServer. To do this, follow these steps:
    1. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Providers\NtpServer
    2. In the right pane, right-click Enabled, and then click Modify.
    3. In Edit DWORD Value, type 1 in the Value data box, and then click OK.
  4. Specify the time sources. To do this, follow these steps:
    1. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters
    2. In the right pane, right-click NtpServer, and then click Modify.
    3. In Edit Value, type Peers in the Value data box, and then click OK. (Comment: This is where you will enter the names of the time servers you will sync with. The best option is to have your own NTP server. Understanding that many don’t have the need (or money) to do this, connecting to an Internet NTP server is your alternative. Whatever you do, DO NOT use time.windows.com. Think about it: Windows dominates the PC market, and all of these clients are configured, by default, to get their time from this site. Pick some other time servers (at least two): http://support.microsoft.com/default.aspx?scid=kb;EN-US;262680.)
      Note Peers is a placeholder for a space-delimited list of peers from which your computer obtains time stamps. Each DNS name that is listed must be unique. You must append ,0x1 to the end of each DNS name. If you do not append ,0x1 to the end of each DNS name, the changes made in step 5 will not take effect (Comment: Technet tells you to set the value to 0x1. But it’s more complicated than that. The value of the flag is dependent on how you want the server to be used. See the NtpServer registry value settings in http://technet.microsoft.com/en-us/library/cc773263(WS.10).aspx#w2k3tr_times_tools_uhlp. My recommendation is to set TWO servers, with your primary as 0x9 and your secondary as 0xA. A good article that describes setting up alternate time sources is located at: http://blogs.technet.com/b/askds/archive/2007/11/01/configuring-your-pdce-with-alternate-time-sources.aspx).
  5. Select the poll interval. To do this, follow these steps:
    1. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Providers\NtpClient\SpecialPollInterval
    2. In the right pane, right-click SpecialPollInterval, and then click Modify.
    3. In Edit DWORD Value, type TimeInSeconds in the Value data box, and then click OK.
      Note TimeInSeconds is a placeholder for the number of seconds that you want between each poll. A recommended value is 900 Decimal. This value configures the Time Server to poll every 15 minutes (Comment: If you’re using someone else’s NTP server, you might want to set this to >= 14400. Many public NTP servers will blacklist you if you try to sync too frequently, and word on the street is that the magic number is 4 hours between syncs…).
  6. Configure the time correction settings. To do this, follow these steps:
    1. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxPosPhaseCorrection
    2. In the right pane, right-click MaxPosPhaseCorrection, and then click Modify.
    3. In Edit DWORD Value, click to select Decimal in the Base box.
    4. In Edit DWORD Value, type TimeInSeconds in the Value data box, and then click OK.
      Note TimeInSeconds is a placeholder for a reasonable value, such as 1 hour (3600) or 30 minutes (1800). The value that you select will depend upon the poll interval, network condition, and external time source (Comment: What they’re trying to tell you is that the better the connection, the smaller you can set this value. If your local time drifts beyond this value, the time will not automatically set, but you will get an error in the event log. You will then need to manually set the time so that it is close to the time on the NTP server. The default is 54000 (15 hours).)
    5. Locate and then click the following registry subkey:
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\MaxNegPhaseCorrection
    6. In the right pane, right-click MaxNegPhaseCorrection, and then click Modify.
    7. In Edit DWORD Value, click to select Decimal in the Base box.
    8. In Edit DWORD Value, type TimeInSeconds in the Value data box, and then click OK.
      Note TimeInSeconds is a placeholder for a reasonable value, such as 1 hour (3600) or 30 minutes (1800). The value that you select will depend upon the poll interval, network condition, and external time source (Comment: This is the same as the prior setting, except going into the past).
  7. Quit Registry Editor.
  8. At the command prompt, type the following command to restart the Windows Time service, and then press ENTER:
    net stop w32time && net start w32time
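As promised, here is a PowerShell consolidation of the registry steps above. It’s a sketch: the NTP peer names are placeholders (substitute your own), and the values mirror my recommendations rather than the Technet defaults.

    # Consolidated version of Step 1 (peer names are placeholders; values per the notes above)
    $w32 = 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time'
    Set-ItemProperty "$w32\Parameters" -Name Type -Value 'NTP'
    Set-ItemProperty "$w32\Parameters" -Name NtpServer -Value 'ntp1.example.com,0x9 ntp2.example.com,0xA'
    Set-ItemProperty "$w32\Config" -Name AnnounceFlags -Value 0x0A
    Set-ItemProperty "$w32\Providers\NtpServer" -Name Enabled -Value 1
    Set-ItemProperty "$w32\Providers\NtpClient" -Name SpecialPollInterval -Value 14400
    Set-ItemProperty "$w32\Config" -Name MaxPosPhaseCorrection -Value 3600
    Set-ItemProperty "$w32\Config" -Name MaxNegPhaseCorrection -Value 3600
    Restart-Service w32time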

Step 2 – Configure your domain clients to use domain time. To do this, join your computers to the domain. That’s right, you don’t do any configuration on your clients; they will, by default, connect to the PDC DC to get their time synchronized. I know it’s funny that I list this as a step, but surprisingly, a lot of people get hung up on this. Remember, only the PDC DC is an NTP client. Everyone else uses Windows time…
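To verify that a client (or the PDC itself) is syncing from the right place, the built-in w32tm utility can tell you; run these from an elevated PowerShell or command prompt:

    w32tm /query /source    # where this machine gets time (your PDC, or the NTP peer on the PDC itself)
    w32tm /query /status    # stratum, last successful sync, poll interval
    w32tm /resync           # force an immediate sync attempt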

eventlog Security Group for Windows Event Logs

I had a bit of a hard time with this one, so hopefully I can save someone else the trouble of finding this information…

We have a security requirement to configure the ACLs for event logs so that their access is restricted.  In Windows 7/Server 2008, a new virtual account, “eventlog”, is required to have full access to the logs to ensure proper functionality.

Since we configure the ACLs using Group Policy, I needed to include this as part of a file permission set.  In order to do this you must search for “NT SERVICE\eventlog” on the local machine.  You will not be able to locate the account any other way.
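If you’re setting (or just testing) the ACL locally, the virtual account also resolves fine from PowerShell; here’s a small sketch, with the log path used purely as an example:

    # Sketch: grant the eventlog virtual account full control on a log file (path is an example)
    $log = 'C:\Windows\System32\winevt\Logs\Application.evtx'
    $acl = Get-Acl $log
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule('NT SERVICE\eventlog', 'FullControl', 'Allow')
    $acl.AddAccessRule($rule)
    Set-Acl -Path $log -AclObject $acl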

I suspect that this can also be configured using SDDL in the new event log GP Admin Templates, but haven’t had a chance to play with that.  If anyone has any experience with this policy, please link a post to my site…

jnlp (Java Network Launching Protocol) does not run from IE, only prompts to save

If you’re running a secure network, you may encounter a situation where you try to launch a Java web app (from an SSL session) and instead IE will only give you the option to save the jnlp file.  Assuming that your JRE version is current, this is likely due to the following Group Policy being enabled:

Computer Configuration | Administrative Templates | Windows Components | Internet Explorer | Internet Control Panel | Advanced Page | Do not save encrypted pages to disk

This causes IE to block saving the jnlp file to the cache, which also precludes it from launching.

Force attempt to provision vPro AMT using SCCM in-band provisioning

If you don’t properly configure your workstation for vPro AMT provisioning before the first SCCM agent call (e.g. you forget to set your certificate thumbprint in MEBx), you’ll end up waiting 24 hours for the machine to reattempt provisioning.  If you’re impatient (like me), you can use this technique to force a reattempt immediately (credit to William York – original source):

Manual steps to issue the WMI command (a one-line PowerShell equivalent follows the list):

  • Open a command prompt and type wbemtest. This is the Windows Management Instrumentation Tester.
  • After the Windows Management Instrumentation Tester utility opens, click Connect.
  • In the Namespace field of the Connect window, type the name of the system on which you want to force the check, followed by \root\ccm (e.g. \\<system name>\root\ccm).
  • Click Connect.
  • You can also run the command on the local system by simply leaving out the host name (i.e. just \root\ccm).
  • After you successfully connect to the target system, click the Execute Method button.
  • In the Get Object Path window, type sms_client in the Object Path field, and click OK.
  • In the Execute Method window, enter TriggerSchedule in the Method field, and click the Edit In Parameters button.
  • In the Object editor for _PARAMETERS window, double-click sScheduleID in the Properties field.
  • In the Property Editor window, change the Value to Not NULL and enter {00000000-0000-0000-0000-000000000120}. This value is the object ID that initiates the OOB auto-provisioning check.
  • Click the Save Property button.
  • In the Object editor for _PARAMETERS window, click the Save Object button.
  • In the Execute Method window, click the Execute button.
  • After you execute the method, you should see a message that the method executed successfully.
  • To confirm that the method executed, look at the target system’s C:\Windows\System32\CCM\Logs\oobmgt.log. You should now see a new entry, GetProvisioningSetting, indicating that the policy has been re-evaluated.
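And the promised PowerShell equivalent; I believe this fires the same trigger (the computer name is a placeholder, and it assumes the SCCM client is installed on the target):

    # One-liner equivalent of the wbemtest steps above (TARGETPC is a placeholder)
    Invoke-WmiMethod -ComputerName TARGETPC -Namespace root\ccm -Class sms_client -Name TriggerSchedule -ArgumentList '{00000000-0000-0000-0000-000000000120}'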