
Archive for the ‘SharePoint 2010 Maintenance’ Category

Fixed: SharePoint 2010 Calendars linked in Outlook 2010 Prompting for Credentials.

February 25, 2014

I recently ran into an issue where users were continuously prompted for credentials when linking SharePoint 2010 calendars into Outlook 2010. Every time a user clicked a linked SharePoint 2010 calendar within Outlook, they were prompted to authenticate against a specific web front end server of the SharePoint farm. After troubleshooting, I learned that when SharePoint hands a URL to the Outlook client, it walks the Alternate Access Mapping (AAM) zones in this sequence:

Intranet zone URL
Default zone URL
Extranet zone URL
Internet zone URL
Custom zone URL

So by default, linked SharePoint 2010 calendars look at the URL in the Intranet zone AAM first and try to authenticate to that zone. If there is a URL in the Intranet zone that users are unable to authenticate against, they will be continuously prompted for credentials when accessing the linked calendar in Outlook.

In my case I had an Intranet zone AAM pointing to a specific web front end server (http://servername:port), so Outlook was trying to authenticate to that URL and couldn’t.

After removing the URL AAM for the Intranet zone and leaving it blank, SharePoint checks the Intranet zone, finds it empty, and moves down the list to the Default zone.

After removing the Intranet zone AAM, users were no longer prompted for credentials.

If you experience this issue, make sure your Intranet Zone AAM is accessible by your users.
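If you prefer to check and clean up the mappings from PowerShell instead of Central Administration, something like the following sketch can help (the URL is a placeholder for whatever your Intranet zone AAM points to):

```powershell
# Load the SharePoint snap-in if running from a plain PowerShell console
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List every alternate access mapping and its zone for the farm
Get-SPAlternateURL | Sort-Object Zone | Format-Table IncomingUrl, Zone, PublicUrl -AutoSize

# Remove a problematic Intranet zone mapping (placeholder URL - adjust for your farm)
# Remove-SPAlternateURL -Identity "http://servername:port" -Zone Intranet -Confirm:$false
```

The removal line is commented out on purpose; review the listing first before deleting anything.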

Migrating SharePoint 2007/2010 Databases to New SQL Server/Cluster Without Having to Completely Reconfigure SharePoint Utilizing SQL Alias

January 31, 2014

I recently had to migrate from an old SharePoint 2007 SQL Server environment to a more stable and robust SQL Server 2008 R2 environment to stabilize SharePoint and increase its performance. I needed to do this with little to no downtime, and I was able to accomplish it by using a SQL alias. Below are the steps I took. They work for both MOSS 2007 and SP2010.

For this example I will be using SQL1 as the name for the “old” SQL Server Instance, and SQL2 for the “new” SQL Server Instance.

1. Stop all SharePoint services that might be communicating with the SQL1 back end. This helps prevent any writes to the databases while they are being backed up.
2. Back up all SharePoint 2007 databases (Config, SSPs, Search, and Content) from SQL1.
3. Copy the backup files over to SQL2.
4. Restore all SharePoint 2007 databases onto SQL2.
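The backup in step 2 can be scripted rather than done by hand; here is a hedged sketch using Invoke-Sqlcmd (the database names and backup share are placeholders, not from my actual setup):

```powershell
# Hypothetical sketch of step 2: back up each SharePoint database on SQL1.
# Database names and the backup share are placeholders - substitute your own.
# Assumes the sqlps snap-in (Invoke-Sqlcmd) is available on the server.
$databases = "SharePoint_Config", "SSP1", "WSS_Content_Portal"
foreach ($db in $databases) {
    $query = "BACKUP DATABASE [$db] TO DISK = N'\\backupshare\sql_backups\$db.bak' WITH INIT"
    Invoke-Sqlcmd -ServerInstance "SQL1" -Query $query -QueryTimeout 0
}
```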

On the SharePoint 2007 server we now need to create a SQL alias (think of it as a hosts file for SQL) to point SharePoint at the new SQL2 instance. The key is to use the exact same name as the “old” SQL Server instance, SQL1, as the alias name. This tricks SharePoint into thinking it is still using the same SQL1 instance, which prevents us from having to completely reconfigure the SharePoint farm.

Next log into one of the SharePoint 2007/2010 servers and bring up the SQL Server Client Network Utility tool. This comes native with Windows so nothing needs to be installed.

1. Start, Run, type cliconfg
2. Select the Alias tab and click the Add… button
3. Enter the Server alias (this will be the name of your “old” SQL Server instance, SQL1)
4. Select the TCP/IP radio button under Network Libraries
5. Enter the new SQL Server name (SQL2) as the Server name under Connection Parameters
6. Click “OK” and “OK” to close out of the cliconfg window
7. Restart all SharePoint 2007 services
8. Do an IISReset on the WFE servers
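If you would rather script the alias than click through cliconfg, the utility simply writes a registry value. The following sketch assumes the standard ConnectTo key locations (cliconfg stores TCP/IP aliases as "DBMSSOCN,<server>[,<port>]") and uses the SQL1/SQL2 names from this example:

```powershell
# Sketch: create the same SQL alias from PowerShell instead of cliconfg.
# On 64-bit Windows, 32-bit clients read the Wow6432Node copy of the key,
# so both locations are written here. Names are placeholders.
$keys = "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo",
        "HKLM:\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo"
foreach ($key in $keys) {
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    # Alias name is the old instance (SQL1); target is the new instance (SQL2)
    New-ItemProperty -Path $key -Name "SQL1" -Value "DBMSSOCN,SQL2" `
        -PropertyType String -Force | Out-Null
}
```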

Browse to your SharePoint environment to verify portal functionality.

That’s it. Pretty simple.

How to Fix: Failed to provision the SharePoint Central Administration Web Application After Patching and Running the Config Wizard.

December 23, 2013

I recently updated our SharePoint 2010 environment with the latest SharePoint SP2 and October 2013 CU patches. After laying down the bits and executing the SharePoint Config Wizard, the wizard trucked along well until Step 9 of 10 and then failed. Looking at the error log, I saw this error:

“Failed to provision the SharePoint Central Administration Web Application.
Exception: System.ArgumentException: The IncomingURL is already present in the collection in a different zone. A URL may only map to one zone at a time. Use a different URL for requests from this zone”

What happened here is that when I initially configured the SharePoint Central Administration web application, I created it in the Default zone with the URL http://servername:port.

Later, after configuring Central Administration, I wanted to access it with a friendlier host name, i.e. http://cadmin. I changed the Default zone to http://cadmin and moved http://servername:port to a custom zone.

When the SharePoint Config Wizard was run on the Central Administration server, it looked for http://servername:port in the Default zone in order to provision the Central Administration web application with the latest SP2 and CU patches. Since it could not find that URL in the Default zone, it failed.

I had to go back into Central Administration and move http://servername:port back to the Default zone. After doing this I reran the SharePoint Config Wizard. This time all steps completed successfully, and the Central Administration web application was provisioned with the latest patches.

After successfully patching, I moved http://cadmin back to the Default zone.

If you run into this same issue, just make sure the Default zone is set to the initial URL of the Central Administration web application; after patching you can switch it back to a friendlier host name.
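The zone moves can also be done from PowerShell with Set-SPAlternateURL; a hedged sketch using the placeholder URLs from this post:

```powershell
# Sketch: swap the Central Administration AAM zones from PowerShell.
# Setting an existing mapping to a new zone moves it there. URLs are
# placeholders - substitute your own.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Before patching: put the original URL back in the Default zone
Set-SPAlternateURL -Identity "http://servername:port" -Zone Default

# After patching: restore the friendlier host name as the Default zone URL
# Set-SPAlternateURL -Identity "http://cadmin" -Zone Default
```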

Hopefully this helps others.

Manage Deletion of Index Items in SharePoint 2010

May 13, 2013

In the environment I work in, SharePoint Search is considered the #1 service for our customers, so refining the Search service for quick returns and limited errors is one of my top priorities.

As many might already know, SharePoint 2010 Enterprise Search can sometimes become a “pain” to configure to meet high demands; you sometimes need a full-time job with Search just to get it where your environment needs it.

I think I’ve had to completely blow away Search five or more times already in my production environment for all types of reasons, and then recreate and reconfigure it.

With SharePoint 2010 Enterprise Search you might encounter crawls (incremental and full) that return all kinds of errors. What I’ve noticed is that when a crawl has tons of errors, it tends to take longer to complete.

My job was to get my incremental crawls to return results in 15 minutes or less, every 15 minutes. I was able to accomplish this.

One of the major issues slowing my crawl rate down was the number of errors returned during each crawl (incremental and full). My farm has 800,000+ searchable items, which is not a whole lot compared to larger organizations, and was returning 2,000+ errors, which in itself is not bad at all.

However, those 2,000+ errors were pushing my incremental crawls past the 15-minute window.

Looking at the crawl error log, I noticed that the majority, if not all, of the errors I was receiving were due to the SharePoint 2010 crawl not cleaning up deleted items in the index.

Errors such as “Item not found” or “Access denied” were showing up. By default, SharePoint 2010 Enterprise Search cleans up (deletes) an item from the index only after the error has been returned in 30 separate crawls (incremental or full) spanning 30 days. So even if the item no longer exists, SharePoint still counts it in the crawl results as an error, which in turn slows the crawl down.

I wanted to remove these “Item not found” and “Access denied” errors, but I did not want to click through every single error in my crawl log and select ‘Remove the item from Index’. That would take forever.

Luckily, SharePoint 2010 now allows us to manage the deletion of index items in Search with PowerShell.

http://technet.microsoft.com/en-us/library/hh127009(v=office.14).aspx

Below is how I utilized the PowerShell cmdlet to redefine the Enterprise Search deletion policy to fit my environment.

=========================================================================================================

Default values (from the TechNet article above):

  • Delete policy for access denied or file not found: ErrorDeleteCountAllowed = 30 crawls, ErrorDeleteIntervalAllowed = 720 hours (30 days)
  • Delete policy for all other errors: ErrorDeleteAllowed = 100 crawls, ErrorIntervalAllowed = 1440 hours (60 days)
  • Delete unvisited policy: DeleteUnvisitedMethod = 1
  • Re-crawl policy for SharePoint content: RecrawlErrorCount = 10 crawls, RecrawlErrorInterval = 360 hours (15 days)

What each policy does:

  • Delete policy for access denied or file not found: when the crawler encounters an access denied or file not found error, the item is deleted from the index if the error was encountered in more than ErrorDeleteCountAllowed consecutive crawls AND the duration since the first error is greater than ErrorDeleteIntervalAllowed hours. If both conditions are not met, the item is retried.
  • Delete policy for all other errors: when the crawler encounters errors of types other than access denied or file not found, the item is deleted from the index if the error was encountered in more than ErrorDeleteAllowed consecutive crawls AND the duration since the first error is greater than ErrorIntervalAllowed hours.
  • Delete unvisited policy: during a full crawl, the crawler executes a delete unvisited operation in which it deletes items that are in the crawl history but are not found in the current full crawl. Use the DeleteUnvisitedMethod property to specify which items get deleted:
      • 0, all unvisited items are deleted.
      • 1 (default), unvisited items that have the same host as the start address specified in the content source are retained, and unvisited items that were discovered by following links to other hosts are deleted.
      • 2, none of the unvisited items are deleted.
  • Re-crawl policy for SharePoint content: this policy applies only to SharePoint content. If the crawler encounters errors when fetching changes from the SharePoint content database for RecrawlErrorCount consecutive crawls AND the duration since the first error is RecrawlErrorInterval hours, the system re-crawls that content database.

$SearchApplication = Get-SPEnterpriseSearchServiceApplication -Identity "<SearchServiceApplicationName>"
$SearchApplication.SetProperty("ErrorDeleteCountAllowed", 1)    #Delete after 1 crawl with an access denied/not found error
$SearchApplication.SetProperty("ErrorDeleteIntervalAllowed", 1) #...and 1 hour since the first such error
$SearchApplication.SetProperty("ErrorDeleteAllowed", 1)         #Delete after 1 crawl with any other error
$SearchApplication.SetProperty("ErrorIntervalAllowed", 1)       #...and 1 hour since the first such error

After setting the deletion policy I fired off a new incremental crawl. During this crawl, the items marked for deletion in my index were deleted. After that crawl finished I fired off one more incremental crawl, and this time it completed within a few minutes (about 3) with only 2 errors.

============================================================================================================
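To verify the values before or after changing them, the same service application object exposes a GetProperty method; a quick sketch (the service application name is a placeholder):

```powershell
# Sketch: read back the current deletion-policy values.
$SearchApplication = Get-SPEnterpriseSearchServiceApplication -Identity "<SearchServiceApplicationName>"
"ErrorDeleteCountAllowed", "ErrorDeleteIntervalAllowed",
"ErrorDeleteAllowed", "ErrorIntervalAllowed" | ForEach-Object {
    # Print each property name alongside its current value
    "{0} = {1}" -f $_, $SearchApplication.GetProperty($_)
}
```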

Fixing: One or more field types are not installed properly. Go to List settings page to delete these fields.

January 28, 2013

Recently I ran into the error “One or more field types are not installed properly. Go to List settings page to delete these fields” when trying to access one of the migrated sites, after migrating from SharePoint 2007 to SharePoint 2010 using a direct database attach (upgrade).

This sometimes happens with a direct database attach upgrade, when the SharePoint Server Publishing Infrastructure feature conflicts with an internal list called the Relationships List. When the Publishing Infrastructure feature is activated it creates this hidden list automatically; it can be accessed by going to this URL: http://portalsite/relationships%20list/allitems.aspx.

Because the Publishing Infrastructure feature was already activated in the SP2007 environment and then moved over to SP2010 via database attach, there is a conflict with a column in that list. In SharePoint 2007 the column is named GroupId and is of type text; in SharePoint 2010 it is renamed GroupGuid, but for whatever reason the column type is not updated during the migration and remains text. This column needs to be of type GUID, not text.

Since there is no way within SharePoint to create a column of type GUID, the best way to fix the problem is to delete this list and have it automatically re-created. To successfully delete the list, follow these steps.

1.  Go to Site Actions, Site Settings and Site Collection features under Site Collection Administration.
2.  Deactivate the SharePoint Server Publishing Infrastructure feature
3.  Go back to http://portalsite/relationships%20list/allitems.aspx
4.  Go to List, List Settings, and delete the list (if the SharePoint Server Publishing Infrastructure feature is not deactivated, this option will not be visible)
5.  After deleting the list go back to Site Actions, Site Settings, and Site Collection features
6.  Activate the SharePoint Server Publishing Infrastructure feature (Note: if you receive this error:

Column Limit Exceeded.

There are too many columns of the specified data type. Please delete some other columns first. Note that some column types like numbers and currency use the same data type.

when trying to reactivate this feature, read this blog posting: http://jshidell.com/2013/01/28/fixing-cannot-activate-sharepoint-2010-publishing-infrastructure-feature-column-limit-exceeded/)

7.  The SharePoint Server Publishing Infrastructure feature will automatically recreate the list.

Go back to http://portalsite/relationships%20list/allitems.aspx and verify that the GroupGuid column is now of type GUID instead of text.

Now go back to the site you were trying to access; you should no longer get the error message “One or more field types are not installed properly. Go to List settings page to delete these fields”.
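For reference, steps 2 and 6 can also be scripted; a hedged sketch assuming the feature’s common name, PublishingSite, and a placeholder site URL:

```powershell
# Sketch: deactivate/reactivate the Publishing Infrastructure feature from
# PowerShell. "PublishingSite" is the Publishing Infrastructure feature at
# site collection scope; the URL is a placeholder.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Disable-SPFeature -Identity "PublishingSite" -Url "http://portalsite" -Confirm:$false
# ...delete the Relationships List here (steps 3-4 above)...
Enable-SPFeature -Identity "PublishingSite" -Url "http://portalsite"
```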

Fixing: Cannot Activate SharePoint 2010 Publishing Infrastructure Feature – Column Limit Exceeded

January 28, 2013

I ran into this problem: 

Column Limit Exceeded.

There are too many columns of the specified data type. Please delete some other columns first. Note that some column types like numbers and currency use the same data type.

This happened after migrating from SharePoint 2007 to SharePoint 2010 using the database attach (upgrade) approach and trying to activate the SharePoint Server Publishing Infrastructure feature.

There is a hidden list in SharePoint 2010 called Quick Deploy Items, which is created when you activate the SharePoint Server Publishing Infrastructure feature. You can access it through this URL: http://portalsite/quick%20deploy%20items. If this list already exists before you try to activate the feature, it will throw the error above. For some strange reason, three columns in this list (JobId, ItemUrl, and ItemType) had each been duplicated about four times, which causes the list to exceed its column limit.

To fix the problem you will have to delete the duplicate columns. You could write a PowerShell script to iterate through the list columns and delete the duplicates, or you can go into each column separately and delete it through the UI. That is what I did.
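If you do want to script it, here is a hypothetical sketch of what such a cleanup could look like; the site URL is a placeholder, and you should verify the duplicate field titles in your own list before deleting anything:

```powershell
# Hypothetical sketch: delete duplicate JobId/ItemUrl/ItemType columns from
# the Quick Deploy Items list. Deleting the wrong field is destructive, so
# test on a copy of the content database first.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://portalsite"
$list = $web.Lists["Quick Deploy Items"]

foreach ($title in "JobId", "ItemUrl", "ItemType") {
    # Collect every field with this display title, keep the first, drop the rest
    $dupes = @($list.Fields | Where-Object { $_.Title -eq $title })
    for ($i = 1; $i -lt $dupes.Count; $i++) {
        $dupes[$i].Delete()
    }
}
$web.Dispose()
```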

After deleting the extra columns, go back and try to activate the SharePoint Server Publishing Infrastructure feature again. This time you should be successful.

If you get another error, try to activate the feature using the STSADM command with farm admin privileges (stsadm -o activatefeature -name PublishingSite -url http://portalsite).

I tried to activate the feature using STSADM and still received an error, but a different one this time:

Provisioning did not succeed. Details: Failed to provision role definitions. OriginalException: Value does not fall within the expected range.

To overcome this error, simply run the same STSADM command again, but this time with the -force switch.

Hopefully this helps those who have been having problems.


Mirror Staging SharePoint 2010 Environment with SharePoint 2010 Production Environment by Moving Over Content Databases using PowerShell

January 22, 2013

Those who maintain both a Production and a Staging SharePoint 2010 environment know the importance of keeping the two balanced (mirrored), so that Staging offers a “true” production-like environment. This allows for “true” testing of new services, deployments, solutions, and patching before deploying to Production.

Of course you can implement third-party solutions (Metalogix, DocAve, Syntergy) for a more scheduled (automated) way of keeping your staging environment in sync with production; if you choose to go that direction, that is fine. However, I will be blogging on how you can accomplish the same thing with PowerShell.

This will require four separate PowerShell scripts.

These scripts *assume* that your SharePoint 2010 environment is using SQL Server 2008 (or R2) and that your SharePoint 2010 production SQL databases have a scheduled backup job.

They also *assume* that your Production and Staging environments are separate farms with their own SQL Servers, but are configured exactly the same.

The first script copies the latest FULL Production SharePoint 2010 content database backups to a destination on the SharePoint 2010 Staging SQL Server.

copy-dbbackups.ps1
__________________________________________________________________________________________________________________

1.  Log into your SharePoint 2010 Production SQL Server.

2.  Copy the script below, replace the placeholder values with ones that correspond to your environment, paste it into Notepad, and save the file as copy-dbbackups.ps1
3.  Open SQL Server Management Studio (SSMS) on your Production SQL Server
4.  Expand SQL Server Agent
5.  Expand Jobs
6.  Right click Jobs and select “New Job”
7.  Under the General tab Give Job a name i.e. (“copy dbbackups”)
8.  Under Steps tab click “New..” button
9.  Give step name i.e. (“copy dbbackups”)
10.  Under Type: select “PowerShell” from dropdown
11.   Under Run As:  Select “SQL Server Agent Service Account”
12.  Under Command:  Click Open… button and browse to the location where you saved copy-dbbackups.ps1 
13.  Under Schedules Tab click the “New..” button
14.  Give Schedule Name:  i.e. (copy dbbackups)
15.  Schedule your job sometime after your FULL backups.  i.e. (If your SQL full backups occur on a Sunday night, schedule your job to happen sometime on Monday after all backups have been completed).
16.  Save your job.
17.  When this job runs, it may take a long time depending on your database backup file sizes. My suggestion is to schedule it overnight, when there is less network traffic.

====================================================================

#Remove old backup files on the Staging SQL Server
get-childitem "\\staging SQL server\m`$\restore\prod2stg" | remove-item -force

#Grab backup files from the backup location on the Production SQL Server
$files = get-childitem "\\backup location\backup\sql_backups\SP2010\FULL" |
    where { $_.FullName -like "*SPContent*_backup_*" -and $_.CreationTime -ge (get-date).adddays(-4) }

#Copy backup files from the Production SQL Server backup location to the Staging SQL Server
$files | copy-item -dest "\\staging SQL server\m`$\restore\prod2stg"

#Remove backups not needed (i.e. Admin and MySites databases) from the Staging SQL Server
get-childitem "\\staging SQL server\m`$\restore\prod2stg" |
    where { $_.Name -like "*_Admin_*" -or $_.Name -like "*_MySites_*" } |
    remove-item -force

=====================================================================

exec-query.ps1
restore-dbbackups.ps1

This will require two scripts: the first executes SQL queries over an established connection and returns the datasets; the second actually restores the backup databases into SQL.

1.  Open Notepad, copy the following script, and save it as exec-query.ps1.
Nothing needs to be changed in this script; just copy it as is.

exec-query.ps1

=====================================================================

function exec-query($sql, $parameters=@{}, $conn, $timeout=0, [switch]$help)
{
    if ($help)
    {
        $msg = @"
Execute a sql statement. Parameters are allowed. Input parameters should be
a dictionary of parameter names and values. Return value will usually be a
list of datarows.
"@
        Write-Host $msg
        return
    }

    $cmd = new-object system.Data.SqlClient.SqlCommand($sql, $conn)
    $cmd.CommandTimeOut = $timeout
    foreach ($p in $parameters.Keys)
    {
        [void] $cmd.Parameters.AddWithValue("@$p", $parameters[$p])
    }

    $ds = New-Object System.Data.DataSet
    $da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
    $da.fill($ds) | Out-Null

    return $ds
}
========================================================================

restore-dbbackups.ps1

This script will call the exec-query script, and then restore the backup of the content databases moved over from your Production SQL environment into the Staging SQL server.

Copy the script below and replace the placeholder values with your environment information. Save the file as restore-dbbackups.ps1

1.  Open SQL Server Management Studio (SSMS) on your Staging SQL Server
2.  Expand SQL Server Agent
3.  Expand Jobs
4.  Right click Jobs and select “New Job”
5.  Under the General tab Give Job a name i.e. (“restore dbbackups”)
6.  Under Steps tab click “New..” button
7.  Give step name i.e. (“restore dbbackups”)
8.  Under Type: select “PowerShell” from dropdown
9.   Under Run As:  Select “SQL Server Agent Service Account”
10.  Under Command:  Click Open… button and browse to the location where you saved restore-dbbackups.ps1 
11.  Under Schedules Tab click the “New..” button
12.  Give Schedule Name:  i.e. (restore dbbackups)
13.  Schedule your job sometime after your FULL backups from Production have been moved over to your Staging environment
14.  Save your job.
15.  When this job runs, it may take a long time depending on your database backup file sizes. My suggestion is to schedule it overnight, when there is less network traffic. If this is staging and you are not concerned about network traffic, schedule it whenever you like.

=========================================================================

#Load the exec-query.ps1 script
. M:\restore\exec-query.ps1

#Grab the database backup files that were moved over from your Production SQL environment.
$backups = get-childitem M:\restore\prod2stg

#Loop through the backup files and trim/truncate the database name by removing unwanted characters from the backup file name. For example, my backup files were named in this format: SPContent_Site1_backup_2013_02_22_200009_6652162.bak. I wanted to remove every character after _backup. You may have to play with the substring parameters a little to make this work for your database names ($_.name.substring(0,$_.name.lastindexof("_")-13), $dbname.substring(0,$dbname.lastindexof("_")), and $dbname.trimend("_backup")).

$backups | % {

    $dbname = $_.name.substring(0, $_.name.lastindexof("_") - 13)
    $dbname = $dbname.substring(0, $dbname.lastindexof("_"))
    $dbname = $dbname.trimend("_backup")

    #Restore the backup file to the Staging SQL Server
    restore-SQLdatabase -SQLServer "SQL SERVER" -SQLDatabase $dbname -Path $($_.FullName) -TrustedConnection $true

    #Establish a SQL connection
    $conn = new-object data.sqlclient.sqlconnection "Server=SQL SERVER;Integrated Security=true"
    $conn.open()

    #Set the database to Simple recovery using the exec-query function
    $sqlcmd1 = "ALTER DATABASE [$dbname] SET RECOVERY SIMPLE WITH NO_WAIT"
    exec-query $sqlcmd1 -conn $conn
    $conn.close()

}

=============================================================================

This last script is not strictly necessary, but I ran into some problems after the restores where SharePoint did not recognize the databases in the farm. To overcome this, I simply dismounted and then remounted the databases in my staging environment by running this script.

Copy the script below and replace the placeholder values to reflect your environment. Save the file as dismount-mount-dbs.ps1

If you use Task Scheduler on your SharePoint server (the one running Central Administration), just add this script as a scheduled task on that server. Schedule it to run after all databases have been restored to your Staging SQL environment.

dismount-mount-dbs.ps1

==============================================================================

#Get databases currently mounted in your SharePoint 2010 staging farm
$databases = Get-SPDatabase

#Loop through the content databases in your staging farm, dismount them, and then re-mount them.
foreach ($database in $databases)
{
    if ($database.Name -like "*SPContent_*")
    {
        Write-Host "Dismounting database " $database.Name -foreground "green"
        Dismount-SPContentDatabase -Identity $database.Name -confirm:$false
        Start-Sleep -s 10
        Write-Host "Mounting database " $database.Name -foreground "green"
        Mount-SPContentDatabase -name $database.Name -DatabaseServer "STAGING SQL SERVER" -WebApplication "STAGING WEB APPLICATION URL" -confirm:$false
        Start-Sleep -s 10
    }
}
=================================================================================