CRM Organization Backup Running Too Long

  • Question

  • The backup of our production CRM organization is taking a long time (40 minutes for 38 GB). The same thing happens for separate (but similar) organizations in our QA environment, even though the environments are on completely separate servers (DB, IIS, APP). Both SQL Servers are full Azure VMs. A corresponding non-CRM database, about 2/3 the size of the CRM one, backs up in 2 minutes. Both databases began at the same time and grow (autogrow) in the same proportion, since the non-CRM data feeds the CRM data. I can find no information on similar issues in a web search. Has anyone had this issue and, if so, how was it resolved?

    I ran a backup of CRM with Activity Monitor open while it was running. The only thing out of the ordinary was that the Database I/O statistic hovered between 8 and 12 MB/sec, which I believe is slow. In the middle of the backup, I started a second backup for the non-CRM database; Database I/O jumped to 39 MB/sec while that backup was running, then dropped back to 8-12 after it finished. The CRM backup also takes 40 minutes whether I run it from a terminal server in SSMS, directly on the DB server in SSMS, or via SQL Agent and a Maintenance Plan, and with or without compression.

    I don't believe SQL Server BACKUP should care (or know) that it is a CRM database versus a non-CRM database. But as I said, the same thing happens in two completely separate server environments. I rebuilt all of the indexes to reduce index fragmentation; that had no effect. I also researched data file fragmentation but have not found a query to determine how much of it exists for a particular database. The general consensus in my search is that it is not something to worry about, that it must be corrected in the file system (not SQL), and that it is a last resort for DBAs reviewing performance issues. We have many other databases (non-CRM) on these servers and all of them back up very quickly, so I don't think it is a file-system issue.
Thank you for any help you can provide.
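    For anyone comparing the two databases, backup throughput can also be read out of msdb history rather than eyeballed in Activity Monitor. A rough diagnostic along these lines (the msdb.dbo.backupset columns are standard; treat the query as a sketch, not a tuning tool):

```sql
-- Compare historical backup throughput per database from msdb.
-- backup_size is in bytes; 1048576 converts to MB.
SELECT  bs.database_name,
        bs.backup_start_date,
        DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
        bs.backup_size / 1048576.0 AS size_mb,
        bs.backup_size / 1048576.0
          / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0)
          AS mb_per_sec
FROM    msdb.dbo.backupset AS bs
WHERE   bs.type = 'D'   -- full database backups only
ORDER BY bs.backup_start_date DESC;
```

    If the CRM database consistently shows a much lower mb_per_sec than its neighbours on the same instance, that points at something database-specific rather than at the storage path.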
    Wednesday, December 23, 2015 5:43 PM

All replies

  • I don't think 40 minutes for a ~40 GB database backup is a terribly long time.  Since this is Azure, there might be some optimization there that could help, but I'm not sure.  I have heard of SQL slowness related to running CRM when the Azure machine running the SQL DB did not have sufficient resources.  I would take a look at the recommended specs and ensure you aren't lacking.  If your specs meet the recommendations, then you should be OK.

    Here are a few links that might help.  If you continue to have concerns, you can open a ticket with Microsoft support and see if they have different recommendations.  Go with recommended rather than minimum specs.



    Regards, Donna

    Monday, December 28, 2015 3:44 PM
  • That is extremely long in my experience. As I said in my question, a 30 GB non-CRM DB backs up in only 2 minutes on the same server/network, and every other non-CRM DB on this server runs just as quickly at proportional sizes. The same situation applies on a completely separate Azure server (our QA server): CRM takes 40 minutes and all the non-CRM databases take about 2 minutes. I don't know how SQL Server backup works internally, but I don't think it is aware of, or cares, whether the database contains CRM versus any other application. I'm curious whether anyone else is experiencing the same behavior (on Azure) and what they may have done to "correct" it.
    Friday, January 1, 2016 10:07 PM
  • Hi,

    Which recovery model are you using, Full or Simple?

    Also, just go to the SQL data folder and check the size of the log file (.ldf).

    You are right that SQL Server doesn't differentiate databases based on the application, but if the transaction log is heavy, a backup under the Full recovery model will take longer.
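    Both checks can be done from a query window instead of the file system; a minimal sketch using the standard catalog views (sys.master_files reports size in 8 KB pages):

```sql
-- Recovery model and log file size for every database on the instance.
SELECT  d.name,
        d.recovery_model_desc,
        mf.name AS logical_log_file,
        mf.size * 8 / 1024 AS log_size_mb   -- size is in 8 KB pages
FROM    sys.databases AS d
JOIN    sys.master_files AS mf
  ON    mf.database_id = d.database_id
WHERE   mf.type_desc = 'LOG';
```

    DBCC SQLPERF(LOGSPACE) gives the same sizes plus the percentage of each log actually in use, which is the more telling number if the files have been pre-grown.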



    Sunday, January 3, 2016 11:50 AM
  • The difference in times (CRM DB vs. non-CRM DB) sounds excessive to me. If the environment is the same, the database-specific factors that can affect SQL backup times are:

    1. Available memory for buffers. To back up a database, SQL needs to read pages into memory so it can then write them out. Assuming the same SQL instance (as the overall memory available to SQL Server is per instance), you could get variation if the database is in heavy use, as SQL will also need memory to manage ongoing requests. A very heavily fragmented (at the page level) database can also suffer here, but it's very unlikely that a CRM database is fragmented in that way (in CRM you only really get this if you have a lot of record deletion)
    2. Ongoing transactions. To manage data changes that occur whilst a backup is running, SQL backup will also append a portion of the transaction log to the backup so that changes can be replayed. This would have some effect on the backup time, but I can't imagine it would account for all the difference
    3. Full-text. Full-text catalogs are included in the backup, but you may not have accounted for them in the database size. If the full-text catalogs are huge, this may account for the difference, but again it is unlikely

    To me, point 1 is the most likely, and a lack of memory would also explain why the I/O rates are higher when backing up the non-CRM DB. Things I'd check are:

    • Are the CRM databases on the same SQL instance as the non-CRM ones? If they are different, check the respective memory allocations
    • As you have similar performance in the QA environment, try disabling any major processes that access CRM (e.g. the CRM Async service, Email router/Server sync, any data syncs), then run the backup to see how much difference this makes
    • SQL Server backup has a couple of options (BUFFERCOUNT and MAXTRANSFERSIZE) which affect buffer usage during backups, and you could try changing these. However, I doubt you'll get huge benefits unless you're really short on memory, and any improvement in backup speed may come at the cost of ongoing responsiveness
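    The last bullet can be tried directly in the BACKUP statement. A sketch with illustrative values only (the database name and path are placeholders, and BUFFERCOUNT × MAXTRANSFERSIZE is roughly the extra memory the backup will claim, so size it to what the instance can spare):

```sql
-- Full backup with explicit buffer tuning; values shown are examples,
-- not recommendations. Here 64 x 4 MB = 256 MB of backup buffers.
BACKUP DATABASE [YourOrg_MSCRM]              -- placeholder name
TO DISK = N'D:\Backups\YourOrg_MSCRM.bak'    -- placeholder path
WITH COMPRESSION,
     BUFFERCOUNT = 64,           -- number of I/O buffers (default is computed)
     MAXTRANSFERSIZE = 4194304,  -- 4 MB per transfer (maximum allowed)
     STATS = 10;                 -- report progress every 10%
```

    Running this once with STATS from SSMS also shows whether the throughput is uniform or collapses partway through, which helps distinguish a buffer problem from a storage one.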

    Microsoft CRM MVP - http://mscrmuk.blogspot.com/ http://www.excitation.co.uk

    Monday, January 4, 2016 12:38 PM
  • Also, don't forget to take a look at your async table and see if there is some optimization you can perform.  This article was written for CRM 4.0 and CRM 2011 but still applies if the async table or workflows have not been optimized.  You can run the scripts that query and report the state of the table to determine whether running the cleanup scripts will be of value.
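    A quick way to gauge whether the async table is worth attention is to count the rows that are already finished. A sketch against the standard CRM schema (the state/status code values below are the usual CRM ones, but verify them against your version before acting on the result):

```sql
-- How many completed async operations are still sitting in the table?
-- StateCode 3 = Completed; StatusCode 30 = Succeeded, 32 = Canceled
-- (assumed standard CRM values - confirm for your CRM version).
SELECT  COUNT(*) AS finished_rows
FROM    dbo.AsyncOperationBase
WHERE   StateCode = 3
  AND   StatusCode IN (30, 32);
```

    Millions of finished rows here would make the CRM database both larger and slower to back up than its raw data volume suggests.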


    Regards, Donna

    Monday, January 4, 2016 1:33 PM