SQL Server 2019 Standard 64-bit Slow Performance vs. SQL Server 2012 Express (SP1) High Performance

  • Question

  • Hello, 

    I am trying to migrate my database from SQL Server 2012 Express (SP1) to SQL Server 2019 Standard 64-bit,

    and I am getting very bad query performance results.

    Same DB, same table, same data, same query...

    with very different query execution times:

    Old server - 9 sec 

    New server - 35 sec

    37,940 rows total


    Old server 

    - Windows Server 2012 Standard 64-bit

    - VM with 50 GB RAM + Xeon E5-2407 2.2 GHz (5 cores) + 15,000 RPM SAS drive

    - Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)  Express Edition (64-bit) on Windows SERVER 2012  <X64> (Build 9200: ) (Hypervisor)

    New server

    - Windows Server 2019 Standard 64-bit

    - Hyper-V VM with 50 GB RAM + Xeon E5-2609 v4 1.7 GHz (8 cores) + SSD - much stronger hardware

    - Microsoft SQL Server 2019 (RTM-CU6) (KB4563110) - 15.0.4043.16 (X64)  Standard Edition (64-bit) on Windows Server 2019 Standard 15.0 <X64> (Build 14393: ) (Hypervisor) 

    SQL Server 2019 Standard 64-bit was supposed to be the biggest performance improvement in my situation,

    but no...

    My experience and my results:

    First

    Rebuilt all the indexes.

    No improvement...

    ______________________

    Changed the compatibility level to 2008 / 2012 / 2019.

    No improvement...

    ______________________


    Tried using scalar UDF inlining:

    ALTER DATABASE [WiseERP] SET COMPATIBILITY_LEVEL = 150;
    GO
    USE [WiseERP];  -- database scoped configuration applies to the current database
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = ON;

    No improvement...

    ______________________

    Tried:

    SELECT 'EXECUTE sys.sp_refreshsqlmodule ''[' + OBJECT_SCHEMA_NAME(object_id) + '].[' + OBJECT_NAME(object_id)
           + ']'';' AS 'RefreshStatement'
    FROM sys.sql_modules
    WHERE is_inlineable = 1
          AND inline_type = 1;
    GO

    (that only affects SPs...)

    No improvement...

    __________________________________

    Installed the latest CU5 SQL 2019 update.

    No improvement...

    ___________________________________

    DBCC TRACEON (4199,-1);

    No improvement...
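
    Note: the same optimizer-fix trace flag can also be scoped to a single statement instead of being enabled instance-wide; a sketch (the query itself is just an illustration):

    -- hypothetical example query; trace flag 4199 applies only to this statement
    SELECT * FROM dbo.ServiceLines
    OPTION (QUERYTRACEON 4199);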

    ___________________________

    LEGACY_CARDINALITY_ESTIMATION = ON

    LEGACY_CARDINALITY_ESTIMATION = OFF

    No improvement...
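
    For reference, on 2019 this setting can be toggled database-wide or tested per query; a sketch assuming the [WiseERP] database name from above:

    USE [WiseERP];
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
    GO
    -- or, to test a single query without changing the database setting:
    SELECT * FROM dbo.ServiceLines
    OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));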

    _________________________________________

    EXEC sp_updatestats;

    No improvement...

    _____________________________________

    All my queries run with poor performance on my new SQL Server 2019 Standard 64-bit!

    Any ideas? 











    • Edited by Naomi N, Wednesday, June 24, 2020 20:32 (typo)
    • Edited by EREZra, Wednesday, August 19, 2020 19:25
    Wednesday, June 24, 2020 18:51

All Replies

  • You would need to look at the query plans on the old and new servers to determine the differences.

    The main difference between 2012 and 2019 is the change to the cardinality estimator.  There are times when queries are slower.  You could try "LEGACY_CARDINALITY_ESTIMATION=ON" and see what happens.

    https://docs.microsoft.com/en-us/sql/relational-databases/performance/cardinality-estimation-sql-server?view=sql-server-ver15

    Wednesday, June 24, 2020 19:44
  • Hi, 

    I didn't find any changes in the query plan (only the DB disk location).

    LEGACY_CARDINALITY_ESTIMATION = ON and LEGACY_CARDINALITY_ESTIMATION = OFF:

    No improvement...

    Thanks 


    • Edited by EREZra, Wednesday, June 24, 2020 19:51
    Wednesday, June 24, 2020 19:51
  • Hi EREZra,

    You can try to update statistics for the entire db:

    exec sp_updatestats

    Wednesday, June 24, 2020 19:52
  • Hi, 

    exec sp_updatestats

    No improvement...

    Thanks !!

    Wednesday, June 24, 2020 19:57
  • Then I would start looking hard at the VM.

    If there are no differences in the query plan, then the only difference is the VM.

    Wednesday, June 24, 2020 19:57
  • Hi, 

    I am using Hyper-V; I don't know of any special configuration that could help here.

    It is a very simple configuration: 50 GB RAM / 16 CPU cores.

    Wednesday, June 24, 2020 20:03
  • Hi EREZra,

    Let's compare hard drive latency on both servers.

    Please run the following query on both servers and share the outcome.

    SQL:

    SELECT LEFT(physical_name, 1) AS drive,
           CAST(SUM(io_stall_read_ms) / (1.0 + SUM(num_of_reads)) AS NUMERIC(10, 1)) AS 'avg_read_disk_latency_ms',
           CAST(SUM(io_stall_write_ms) / (1.0 + SUM(num_of_writes)) AS NUMERIC(10, 1)) AS 'avg_write_disk_latency_ms',
           CAST(SUM(io_stall) / (1.0 + SUM(num_of_reads + num_of_writes)) AS NUMERIC(10, 1)) AS 'avg_disk_latency_ms'
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS divfs
         JOIN sys.master_files AS mf ON mf.database_id = divfs.database_id
                                        AND mf.file_id = divfs.file_id
    GROUP BY LEFT(physical_name, 1)
    ORDER BY avg_disk_latency_ms DESC;


    Wednesday, June 24, 2020 20:08
  • Hi

    New Server results

    drive  avg_read_disk_latency_ms  avg_write_disk_latency_ms  avg_disk_latency_ms
    D      9.4                       1.4                        9.0

    Old server results

    drive  avg_read_disk_latency_ms  avg_write_disk_latency_ms  avg_disk_latency_ms
    D      9.9                       76.9                       46.4

    Wednesday, June 24, 2020 20:15
  • Here are my machine's numbers

    Database Server Drives Latency
    drive  avg_read_disk_latency_ms  avg_write_disk_latency_ms  avg_disk_latency_ms
    C      5.6                       1.5                        3.9
    Wednesday, June 24, 2020 20:39
  • As others have said, we need to see the query and the actual execution plans for the old server and the new server to be able to help you. With what you have told us so far, you can only get shots in the dark - as you may have noticed.

    You can upload executions plans on http://www.brentozar.com/pastetheplan


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    • Suggested as Answer by Lily Lii, Monday, June 29, 2020 01:29
    Wednesday, June 24, 2020 21:59
  • Hi EREZra,

    You could change the compatibility level to 110 and change the database-scoped setting "Query Optimizer Fixes" to OFF, to see whether it is an optimizer issue.
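
    In T-SQL form (a sketch, assuming the [WiseERP] database name mentioned earlier; "Query Optimizer Fixes" corresponds to the QUERY_OPTIMIZER_HOTFIXES scoped configuration):

    ALTER DATABASE [WiseERP] SET COMPATIBILITY_LEVEL = 110;
    GO
    USE [WiseERP];
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = OFF;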

    And to determine whether there is a problem with storage performance, you can check the ERRORLOG file. You will see slow-I/O messages in your ERRORLOG file if the issue is related to the storage I/O subsystem.
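
    One way to search the current ERRORLOG for those slow-I/O warnings (a sketch; xp_readerrorlog is an undocumented procedure, and the parameters here are log number, log type 1 = SQL Server, and a search string):

    -- looks for the "I/O requests taking longer than 15 seconds" messages
    EXEC sys.xp_readerrorlog 0, 1, N'longer than 15 seconds';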

    Also, a slow network, a small buffer pool, or the wrong technology being used are some of the reasons that degrade SQL Server performance. 

    refer to: SQL Server database slowness troubleshooting.

    TIPS TO IMPROVE PERFORMANCE OF SLOW RUNNING SQL SERVER.

    Best Regards,

    Lily


    MSDN Community Support

    Thursday, June 25, 2020 06:31
  • Hi, 

    I still can't find a solution.

    No errors in the ERRORLOG.

    Compatibility level 110 - no improvement...

    Changed the database scoped setting "Query Optimizer Fixes" to OFF - no improvement...

    I made a storage upgrade to SSD RAID 1 (from SAS 15K RAID 5).

    The new server's avg_disk_latency is 0.8 ms.

    And 

    I can not believe my eyes...

    still the same results...

    Old server (old hardware) with SAS 15K - 9 sec 

    New server (new hardware) with SSD - 35 sec

    37,940 rows total

    It's a really, really strange scenario.

    Is there anyone in the audience with an idea? 


    Erez








    • Edited by EREZra, Sunday, August 16, 2020 17:51
    Sunday, August 16, 2020 17:14
  • As others have said, without the execution plan everything else is largely a waste of time.

    jchang

    Sunday, August 16, 2020 19:42
  • Hi,

    Executions plans

    Old Server 

    https://www.brentozar.com/pastetheplan/?id=Sk8qIHYGv

    10sec  

    New Server  

    https://www.brentozar.com/pastetheplan/?id=BypaDBKfw

    44sec


    Erez

    Tuesday, August 18, 2020 12:22
  • There are subtle differences in the plans; don't worry about the difference in plan cost, as the formulas for this are so old as to be of little value.

    In speculating as to why there is a difference between old and new on the hardware and OS side, there are two items I would want to know: 

    1) Is Lock Pages in Memory in effect on one system and not the other?

    2) Does one system have the Meltdown/Spectre mitigation patches and not the other?

    Also:

    3) Provide the actual plan if you can (Include Actual Execution Plan, or Ctrl+M, then run the query). The difference between estimated and actual rows is very important.

    The plan is large enough that I cannot see it in its entirety on my 43-inch 4K screen (you should ask your company to get you two of these if they expect you to find the issue).

    4) Please make the two index changes below. I am not expecting a big improvement, but it will reduce the number of operations in the plan so I can see it better.

    CREATE INDEX IX_HelpTableLines_2 ON dbo.HelpTableLines (FieldIndex,HelpTableCode) 
    WITH(DROP_EXISTING = ON)
    CREATE INDEX IX_HelpTableLines_9 ON dbo.HelpTableLines (HelpTableCode,FieldIndex) 
    WITH(DROP_EXISTING = ON)

    To restore the originals, run the above without the second column in the index key. Also, if the table HelpTableLines is too large and the FieldName column is not too wide, put it in the INCLUDE part of one of the indexes, e.g. INCLUDE(FieldName).

    5) Provide the SQL for the view View_ServiceLinesManager, as that may provide some insight.

    6) There are some type conversions; could you look into whether any of these columns have non-integer values?

    (CONVERT_IMPLICIT(int,[ERP].[dbo].[OrderHistoryErp].[OrderHistoryErpDeliveryCode],0)) 
    ([ERP].[dbo].[HelpTableLines].[FieldIndex]=CONVERT_IMPLICIT(int,[ERP].[dbo].[HelpTableLines].[FieldExtraDetail2],0)) 
    (CONVERT_IMPLICIT(int,[ERP].[dbo].[HelpTableLines].[FieldExtraDetail2],0)=[ERP].[dbo].[WorkOrderRows].[WorkOrderRowDiagnosisIndex]) 
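
    A quick way to check point 6 (a sketch using TRY_CAST, available since SQL Server 2012; the table and column names are taken from the conversions above):

    -- data types of the columns involved in the implicit conversions
    SELECT c.name, t.name AS data_type
    FROM sys.columns AS c
         JOIN sys.types AS t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID('dbo.HelpTableLines')
          AND c.name IN ('FieldIndex', 'FieldExtraDetail2');

    -- rows whose FieldExtraDetail2 does not convert cleanly to int
    SELECT *
    FROM dbo.HelpTableLines
    WHERE TRY_CAST(FieldExtraDetail2 AS int) IS NULL
          AND FieldExtraDetail2 IS NOT NULL;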



    jchang

    Tuesday, August 18, 2020 13:42
  • One more: check the true operating frequency of the processor, e.g. in Task Manager > Performance > CPU. The processor label is at the top right, and the Speed value at the bottom left is approximately the actual frequency; check that the frequency is not bouncing. Also, in Control Panel > Power Options: is High Performance set? The key is to not have the CPU drop into power-save mode while the query is running.

    jchang

    Tuesday, August 18, 2020 13:49
  • The only thing in the new server execution plan that I can see causing problems is at the very bottom, about center: there is a sub-source (my unofficial term) involving Customers, CompaniesRegionsCities and a couple of HelpTableLines. It has a scan of Customers (101,825 rows) going into a Nested Loops join, with a sub-expression of Table Spool, more Loops, and a Sort.

    In the old server plan, this sub-source is at the top. Note there the Scan feeding later into a Merge join, with the inner source having the Loop join but not the dangerous Spool.

    If you can trace this to the source of the problem, then file a complaint asking Microsoft why they think the new server has a good plan. Microsoft is very proud of their great and powerful Intelligent Query Optimizer in 2019 (and it is, but even very smart still messes up on occasion). Here, I don't think the Table Spool is a good idea.

    I notice you have MAXDOP set to 1 on the new server (the old one is forced to 1 by Edition). Try MAXDOP = 2.
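
    On 2019 that can be changed per database rather than instance-wide; a sketch assuming the [WiseERP] database name from earlier in the thread:

    USE [WiseERP];
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;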

    Also, these indexes might push the plan away from doing this:

    CREATE INDEX IX_RegionIndex ON dbo.CompaniesRegionsCities(RegionIndex,CityIndex)

    CREATE INDEX IX_HelpTableLines_9 ON dbo.HelpTableLines (HelpTableCode,FieldIndex) INCLUDE(FieldName)
    WITH(DROP_EXISTING = ON)


    jchang


    • Edited by jchang61, Tuesday, August 18, 2020 14:45
    Tuesday, August 18, 2020 14:44
  • Hi jchang61

    THANKS

    Lock Pages in Memory (enabled only on the VM of the new server) - no effect 

    _  Meltdown/Spectre - the old server no; the new has all the latest updates

    Control Panel > Power Options: is High Performance set?  YES - no effect 

    _

    MAXDOP on the new server is now 2 - no effect 

    _

    CREATE INDEX IX_RegionIndex ON dbo.CompaniesRegionsCities(RegionIndex,CityIndex)

    CREATE INDEX IX_HelpTableLines_9 ON dbo.HelpTableLines (HelpTableCode,FieldIndex) INCLUDE(FieldName)
    WITH(DROP_EXISTING = ON)

    No effect 

    _

    Do you think it's something with the new SQL Server 2019 query optimization, and not something with the Hyper-V VM? 


    Erez

    Tuesday, August 18, 2020 15:33
  • There is a difference in the execution plan that I might be concerned about. See if you can generate the Actual Execution Plan on the new system; that is more likely to be the source of the problem.

    There is a possibility, with a VM on a multi-socket system, that the thread running your query is on a core in one node while the memory allocated for your query is on the other node.

    Look into sys.dm_os_memory_node_access_stats; you will need trace flag 842, but double-check, because it has been a while. This should not be a likely event, so I would look into the actual execution plan for the new server first.


    jchang

    Tuesday, August 18, 2020 15:48
  • Hi,

    Actual Execution Plan

    https://www.brentozar.com/pastetheplan/?id=ByuQqOFzD


    Erez

    Tuesday, August 18, 2020 16:02
  • Now I am certain this problem is in this part of the plan.

    If it were me, I would not use the view: identify this section of the query, insert the rows into a temp table, and have the main query join to the temp table instead of the sub-query.

    But you are probably looking for the good execution plan?

    Notice that, of the 101,825 rows in Customers, it must touch 58,751,871 rows before yielding 25,927 rows. If your company will pay for it, file an issue with Microsoft on how the Query Optimizer came up with a poor plan when 2012 had a good plan. This is not a bug but rather a deficiency, and MS might not credit your account.

    Right now, without the view definition, I do not know why the join from Customers to HelpTableLines on FieldIndex is done in such an awkward manner involving the spool.
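
    The temp-table rewrite suggested above might look roughly like this (purely illustrative - the join predicates are guesses based on the operators discussed in this thread, not the real view definition):

    -- materialize the problematic sub-source once
    SELECT c.CustomerIndex, h.FieldName
    INTO #CustomerHelp
    FROM dbo.Customers AS c
         JOIN dbo.HelpTableLines AS h ON h.FieldIndex = c.CustomerIndex;  -- hypothetical predicate

    -- then have the main query join to the temp table instead of the nested view
    SELECT sl.*, ch.FieldName
    FROM dbo.ServiceLines AS sl
         JOIN #CustomerHelp AS ch ON ch.CustomerIndex = sl.serviceLineEntityIndex;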


    jchang

    Tuesday, August 18, 2020 16:49
  • For anyone interested, this is the equivalent portion of the plan on the old system.

    It would be really nice if we could find the sub-expression for these tables & joins,

    then query with a join on [Customers].CustomerIndex = [ServiceLines].serviceLineEntityIndex to see if the spool operator is there. 


    jchang

    Tuesday, August 18, 2020 17:28
  • The view [ERP].[dbo].[View_ServiceLinesManager] is my worst case...
    In general, the new server performs poorly vs. the old server;

    that is the main problem.

    Simple example: 

    SELECT * FROM ServiceLines  (a table, 47,000 rows)

    Elapsed time:
    00:00:01.154 sec - old server (very old server with old SAS drives, RAID 5)

    00:00:02.453 sec - new server (strong server with new Dell SSD drives, RAID 1) 

    A global, deep performance problem. 



    Erez

    Tuesday, August 18, 2020 19:01
  • Quick check - are you running with Results to Grid in SSMS? If you change to Results to Text, how is the performance?

    Looking for new opportunities

    For every expert, there is an equal and opposite expert. - Becker's Law


    My blog


    My TechNet articles

    Tuesday, August 18, 2020 19:08
  • Hi

    Results to Grid (old server): 00:00:01.154 sec

    Results to Grid (new server): 00:00:02.453 sec

    Results to Text (new server): 00:00:28.265 - super slow...


    Erez

    • Suggested as Answer by Naomi N, Tuesday, August 18, 2020 19:16
    • Unsuggested as Answer by Naomi N, Tuesday, August 18, 2020 19:16
    Tuesday, August 18, 2020 19:13
  • Hmm, this doesn't make much sense (I was trying to eliminate the visual part of returning the result in SSMS). Something is indeed wrong - hopefully other experts will help to pinpoint what exactly.


    Tuesday, August 18, 2020 19:17
  • Are you sure that Results to Text should be much faster than Results to Grid? 

    You can also see the results from the old server:

    Results to Grid (old server): 00:00:01.154 sec 

    Results to Text (old server): 00:00:17.163 

    Results to Grid (new server): 00:00:02.453 sec

    Results to Text (new server): 00:00:28.265 - super slow...


    Erez



    • Edited by EREZra, Tuesday, August 18, 2020 19:36
    Tuesday, August 18, 2020 19:26
  • I think Results to Text is supposed to be quicker. You can also try discarding results after execution (menu Query / Query Options / Results) and run again with discarded results to see what kind of performance you get.


    Tuesday, August 18, 2020 19:37
  • Discard results (old server): 00:00:00.109 sec 

    Discard results (new server): 00:00:00.187 sec

    It's always slower on my new server. 




    Erez

    Tuesday, August 18, 2020 19:45
  • Your old server is 2.2 GHz, the new one 1.7 GHz (though you should verify the actual operating frequency), so single-threaded operations should be a little slower. I would focus on the worker time (CPU) that SQL Server reports, not elapsed time. 
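
    SET STATISTICS TIME reports both; for example:

    SET STATISTICS TIME ON;
    SELECT * FROM dbo.ServiceLines;
    SET STATISTICS TIME OFF;
    -- the Messages tab then shows "CPU time = ... ms,  elapsed time = ... ms" for the statement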


    jchang

    Tuesday, August 18, 2020 21:24
  • And what if you run

    DECLARE @d datetime2 = sysdatetime();
    SELECT * INTO #temp FROM ServiceLines;
    SELECT datediff(ms, @d, sysdatetime());

    on both servers?

    It may be a good idea to run it multiple times to get an average.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Tuesday, August 18, 2020 21:26
  • Also run SELECT * FROM sys.dm_os_memory_node_access_stats; you will need trace flag 842 to prove you are comparing the same memory node access (local or remote).

    jchang

    Tuesday, August 18, 2020 23:53
  • And what if you run

    DECLARE @d datetime2 = sysdatetime();
    SELECT * INTO #temp FROM ServiceLines;
    SELECT datediff(ms, @d, sysdatetime());

    on both servers?

    It may be a good idea to run it multiple times to get an average.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Interesting test.

    Old server: 00:00:01.3

    New server: 00:00:00.250

    The new server did it 80% faster.

    What can be understood from that test?



    Erez

    Wednesday, August 19, 2020 16:07
  • Also run SELECT * FROM sys.dm_os_memory_node_access_stats; you will need trace flag 842 to prove you are comparing the same memory node access (local or remote).


    jchang

    New server: (screenshot of sys.dm_os_memory_node_access_stats results)

    Old server: (screenshot of sys.dm_os_memory_node_access_stats results)


    Erez


    • Edited by EREZra, Wednesday, August 19, 2020 16:56
    Wednesday, August 19, 2020 16:10
  • On Erland Sommarskog's test, I would say your new server is good, but of the connections from your SSMS machine to the old and new servers, the connection to the new one is slower.

    On the os_memory_node results: the first result shows all access is local, which is good, because your cores are all on node 0 and all your memory is local (same node as the cores).

    The second is a single node, with no activity.


    jchang

    Wednesday, August 19, 2020 16:41
  • On Erland Sommarskog's test, I would say your new server is good, but of the connections from your SSMS machine to the old and new servers, the connection to the new one is slower.

    On the os_memory_node results: the first result shows all access is local, which is good, because your cores are all on node 0 and all your memory is local (same node as the cores).

    The second is a single node, with no activity.


    jchang

    It's strange, because the old server is my production

    and the new server is the test environment (with no activity).

    "the connection to the new is slower" - from where to where?

    I did all my tests on each server's localhost. 


    Erez



    • Edited by EREZra, Wednesday, August 19, 2020 17:19
    Wednesday, August 19, 2020 17:02
  • To all: EREZra provided a cloned database to me. On my SQL Server 2019 instance, using the cloned database, I can generate the estimated execution plan (having data distribution statistics but no data). It does not have the bad spool operator causing the inefficiency.

    The SQL statement from the earlier plans is of the form SELECT column list FROM view1.

    View1 references 15 second-level views, which then reference 5 more views, then 12 more views, then 1 more view. So it is nested 4 or 5 deep?

    I recall Grant Fritchey saying deeply nested views really suck. But I am not sure whether the query optimizer gives up?

    Also, what is the effect of the query hint EXPAND VIEWS? Did SQL Server 2019 change the handling of nested views?


    jchang

    Wednesday, August 19, 2020 17:43
  • I meant the connection from SSMS to the SQL Server, but you say you ran SSMS on the local machine. I think this means your local machine is good, and our issue is something else.

    jchang

    Wednesday, August 19, 2020 17:44
  • But what is the explanation

    for a simple

    SELECT * FROM ServiceLines  (47,000 rows) - a table, not a nested view?

    Elapsed time:
    00:00:01.154 sec - old server  

    00:00:02.453 sec - new server 



    Erez

    Wednesday, August 19, 2020 18:46
  • Nested views do not inherently cause a problem, other than normally doing way more work than necessary. 

    However, joining many tables, or the same tables over and over, causes the optimizer to become confused and create poor plans.

    I would suggest rewriting the query to be as small as possible.

    • Suggested as Answer by Naomi N, Wednesday, August 19, 2020 19:23
    Wednesday, August 19, 2020 18:47
  • Nested views do not inherently cause a problem, other than normally doing way more work than necessary. 

    However, joining many tables, or the same tables over and over, causes the optimizer to become confused and create poor plans.

    I would suggest rewriting the query to be as small as possible.

    but....

    SQL Express 2012 (on a slow server) can get the results in 8 sec,

    and SQL Standard 2019 (no memory/CPU limits, on a strong server) gets the same results in 38 sec.

    ???

    In addition,

    simple table selects are 50% slower.

    Absurd....

    I think that we need to focus on the simple select

    before diving deep into nested views. 


    Erez






    • Edited by EREZra, Wednesday, August 19, 2020 19:07
    Wednesday, August 19, 2020 18:59
  • It took me some time to find that blog among her many blogs (only because I forgot what she named it - although the name was so catchy, I should have remembered). Just to reinforce the problem of nested views (although I don't know if this still holds true - most likely to some extent):

    https://blogs.lessthandot.com/index.php/datamgmt/dbprogramming/t-sql-wednesday-21-t/

    A short and sweet blog, BTW - hope you enjoy re-reading it as I did.




    • Edited by Naomi N, Wednesday, August 19, 2020 19:08
    Wednesday, August 19, 2020 19:04
  • That is almost certainly due to the new optimizer. 

    I would contact MS Support and create a ticket, since you are apparently comfortable supplying the database in question.

    Wednesday, August 19, 2020 19:19
  • That is almost certainly due to the new optimizer. 

    I would contact MS Support and create a ticket, since you are apparently comfortable supplying the database in question.

    It costs $999.00 for a single incident (business hours support). 

    Erez

    Wednesday, August 19, 2020 19:28
  • If MS determines it is an actual bug, they will refund your money.


    Wednesday, August 19, 2020 19:44
  • The thing is, this is not a bug. It is SQL Server 2019 not producing a good plan in his environment. On my system, I do not get the nasty spool operator.

    The point is, MS wants SQL Server 2019 to be the great new thing that just makes your poor code run better, which is true in many cases, but not this one, for some reason.

    Wow, $999? What happened to $250 or something?


    jchang

    Wednesday, August 19, 2020 19:56
  • I tried to Google the current per-incident support amount, but the only relevant link I found quickly was this one:

    https://redmondmag.com/articles/2014/12/01/microsoft-ups-support-costs.aspx

    (in 2014 it was raised to ~$500 per incident without any extra notice, according to that link)



    Wednesday, August 19, 2020 20:16
  • At the suggestion of jchang61,

    using

      OPTION (FORCE ORDER)

    made some improvement:

    Old server with OPTION (FORCE ORDER):
    25 sec
    Old server without OPTION (FORCE ORDER):

    9 sec

    New server with OPTION (FORCE ORDER):
    00:00:15  
    New server without OPTION (FORCE ORDER):


    Erez


    • Edited by EREZra, Wednesday, August 19, 2020 20:22
    Wednesday, August 19, 2020 20:21
  • Wow, I have been out of touch - I was too cheap to pay the $250, then it became $500, and now $999?

    jchang

    Wednesday, August 19, 2020 20:29
  • The SQL Server 2019 query optimizer is super smart; it either maintains or improves on older versions, but as I have said in other posts, occasionally stupid wins one or two. This is why you never argue with a dummy, no matter how smart you are: it doesn't hurt his reputation if he loses, but it does hurt yours.

    jchang

    Wednesday, August 19, 2020 20:32
  • It's strange, because the old server is my production

    and the new server is the test environment (with no activity).



    "the connection to the new is slower" - from where to where?

    I did all my tests on the localhost. 

    So the bottleneck is SSMS!

    Or SSMS in combination with SQL Server. If they are on the same VM, they compete for memory and CPU.

    Or simply that SSMS 2012 is faster than SSMS 18. (I am assuming that you have these versions on the two servers.)

    For this issue, you should focus on a context that matters. That is, your production workload is not running this 47000-row query from SSMS.

    If your workload is a webapp where you have the web server on the same machine as SQL Server, and the web app runs this query, it is somewhat meaningful.

    But if the application server runs on a different box, testing with SSMS locally for a query with a large result set is not a good test. You can test from a different machine, for instance the application server.

    Then again, your real problem may not be the 47,000-row simple query, but the query with the gargantuan query plan that Joe has been playing with. For that query, I'm inclined to say that it is long overdue for a rewrite. In many cases, when you get performance regressions when you upgrade, it is because there is some problem in the query or with the indexes. It might have run acceptably on older versions, but that could well have been pure luck.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Wednesday, August 19, 2020 21:34
  • I recall Grant Fritchey saying deeply nested views really suck? But I am not sure if the query optimizer gives up?

    The optimizer never sees the views (unless there is NOEXPAND), only the result after the expansion. But it does see an awfully big query, and it may time out. (Although I could not see this in the actual plan that Erez posted.)

    also, what is the effect of the Query Hint : EXPAND VIEWS? Did SQL Server 2019 change the handling of nested views?

    No. Books Online is not exactly clear about what EXPAND VIEWS really does, but it seems to suggest that it prevents indexed views from being used directly. But since views are expanded and rematched by default, I cannot really make sense of it.

    Are there any indexed views in the database? And do any of these views appear in view definitions at some level?
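
    A quick way to check for indexed views (a sketch: a view is indexed when it has a unique clustered index, i.e. index_id = 1):

    SELECT v.name AS view_name
    FROM sys.views AS v
    WHERE EXISTS (SELECT 1
                  FROM sys.indexes AS i
                  WHERE i.object_id = v.object_id
                        AND i.index_id = 1);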


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Wednesday, August 19, 2020 21:45
  • Hi Erland Sommarskog,

    Thanks for your reply

    on both the OLD and NEW servers (separate physical server / separate VM),

    I am using SSMS + SQL Server + the web server on the same VM.

    SSMS 2012 is faster than SSMS 18!  (It takes 20 sec just to load SSMS 18, vs. SSMS 2012, which takes 5 sec.)

    My original scenario: 

    First, I started making some tests over my network, from my desktop to the web server on my new server. 

    I noticed that everything (including execution of a simple select from a table) was going a little slower compared to my old server. 

    I finally got to the page that loads that specific nested view - it cost me 37 sec of waiting...  

    My first tests were from a different box... 


    For that reason I connected directly to the VM and started to investigate my queries.

    I realized that everything works slower than on my old server:

    SSMS + SQL Server + queries + web app.


    "it is somewhat meaningful" - the 47,000 rows is only an example, to show that it is not only a matter of slow nested views.

    The assumption was that a server with

    the latest OS + the latest SQL Server + the latest hardware  +++  8 years of technological improvement

    would perform the same operations at least 2 times faster,

    and not 2 times slower...

    It does not matter if you do SELECT * with 47,000 or 1,000,000 rows from the DB...


    It shouldn't be like that!  

         



    Erez




    • Edited by EREZra, Thursday, August 20, 2020 06:57
    Thursday, August 20, 2020 05:30
  • "it is somewhat meaningful" - the 47,000 rows is only an example, to show that it is not only a matter of slow nested views.

    The assumption was that a server with

    the latest OS + the latest SQL Server + the latest hardware  +++  8 years of technological improvement

    would perform the same operations at least 2 times faster,

    and not *2 times slower* ... when selecting 47,000 or 1,000,000 rows from the DB...

    It shouldn't be like that!  

         

    Maybe, but it is very important to understand that performance from a modern RDBMS is not entirely deterministic. The optimizer works with statistics, which typically have been sampled from your data, and from these the optimizer estimates which is the best plan.

    The optimizer is improved with every release, but it still has a long way to go before it is perfect. For instance, it is nowhere close to evaluating all possible plans; for a query of this complexity, that could probably take days or even months. And the plan would still be based on estimates.

    And while the optimizer is constantly improved, there are always cases where the changes backfire and a query suffers worse performance. That is just a fact of life when working with a complex product of this kind.

    Yes, I understand that you don't want to see these regressions - no one does - but I would argue that the onus is also on you. That is, well-written queries with well-supporting indexes are less likely to suffer regressions than monster queries like this one.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Thursday, August 20, 2020 21:17