Many thanks to Robin and Christophe.
Some changes we made were:
Update Server Spec
- Based on Capita’s recommended setup, both the allocated CPU cores and RAM on our server were below the minimum spec
- We increased both and are now substantially above the recommended minimum
- We run the database on a VMware server; Capita sent us a best-practices guide for this, and having checked it we believe everything there was already in order
- We have moved the database server to an Intel host, as these work more efficiently with SQL Server than AMD chips of the same spec
Remove all scalar-valued functions from our code and use only table-valued functions
- After I submitted my findings to Capita, they added a utility to update all RepGens to remove scalar-valued code, and have moved some of their own code to table-valued functions, but I believe this is not yet fully complete
- A scalar-valued function cannot be inlined by the query optimiser, so the code is not optimised and is executed once for every output row
- https://www.brentozar.com/blitz/blitz-r ... ql-server/
- https://www.brentozar.com/archive/2014/ ... ing-query/
- https://www.brentozar.com/archive/2016/ ... -bad-idea/
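The difference can be illustrated with a minimal sketch (the table, column and function names here are hypothetical examples, not actual UNIT-e objects):

```sql
-- Scalar UDF: executed once per output row and opaque to the optimiser
CREATE FUNCTION dbo.fn_FullName (@StudentID INT)
RETURNS NVARCHAR(200)
AS
BEGIN
    DECLARE @Name NVARCHAR(200);
    SELECT @Name = Forename + N' ' + Surname
    FROM dbo.Student
    WHERE StudentID = @StudentID;
    RETURN @Name;
END;
GO

-- Inline table-valued equivalent: the optimiser can expand this
-- directly into the calling query's plan
CREATE FUNCTION dbo.tvf_FullName (@StudentID INT)
RETURNS TABLE
AS
RETURN
    SELECT Forename + N' ' + Surname AS FullName
    FROM dbo.Student
    WHERE StudentID = @StudentID;
GO

-- Usage: CROSS APPLY lets the inline function join set-based
SELECT e.EnrolmentID, f.FullName
FROM dbo.Enrolment AS e
CROSS APPLY dbo.tvf_FullName(e.StudentID) AS f;
```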
We have previously advised customers of the “To Text To Convert” tool in Report Generator, which colleges should run to migrate most occurrences of the scalar function “To Text” into the equivalent SQL Server native functions “Convert”, “Datename” and “Datepart”, as required by the format parameters used. This would benefit Columbus users in particular.
NB: Some VB tools still require the To Text function in order to export data into MDBs successfully. In particular this applies to Curriculum Balancing and the Scottish FES reports, so these reports must be locked so that they are not altered by running the To Text to Convert tool.
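As a rough sketch of the kind of substitution the tool performs (the exact mapping depends on the format parameters in each report, so these column names and formats are illustrative only):

```sql
-- Native date-formatting functions replacing scalar "To Text" calls
SELECT CONVERT(VARCHAR(10), StartDate, 103) AS StartDateUK,  -- dd/mm/yyyy
       DATENAME(MONTH, StartDate)           AS StartMonthName,
       DATEPART(YEAR, StartDate)            AS StartYear
FROM dbo.Enrolment;
```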
Added clustered indexes to all main UNIT-e tables we query
- After I submitted my findings to Capita they added clustered indexes to all UNIT-e tables in the next major upgrade
- A table without a clustered index is stored as an unordered heap, so operations such as sorting data are very expensive for the system, whilst inserts may be marginally faster
- https://www.brentozar.com/blitz/heaps-t ... red-index/
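Heaps are easy to find from the system catalog, and a clustered index is a one-line fix (the table and column names below are hypothetical):

```sql
-- List heaps (tables with no clustered index) in the current database
SELECT OBJECT_NAME(object_id) AS TableName
FROM sys.indexes
WHERE index_id = 0;   -- index_id 0 means the table is a heap

-- Give a heap a clustered index on its key column
CREATE CLUSTERED INDEX CX_Student_StudentID
    ON dbo.Student (StudentID);
```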
Split the TempDB database into multiple files
- We discovered that splitting the TempDB database into multiple equally sized files gives noticeably better performance
- I think Capita said they would update their documentation to suggest this, but I am not sure if they have
We are adding a reference to this generic best-practice advice for SQL Server installations to the next version of the UNIT-e Infrastructure document (4.35.500)
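For anyone wanting to do the same, the usual approach is one data file per CPU core up to eight, all the same size with the same growth increment (the file paths and sizes here are just examples for your own environment):

```sql
-- Resize the existing TempDB data file so all files match
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

-- Add further equally sized data files (repeat per core, up to 8)
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 2048MB,
              FILEGROWTH = 512MB);
```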
Re-write RepGen-created SQL to avoid data-conversion issues, and use parameters
- After I submitted my evidence to Capita, they have been working to make RepGen use parameterised queries, although the SQL generated is still no better
- We discovered that a lot of RepGen-created code produced slow queries because of the way it structured the table joins etc.: unnecessary grouping, returning 100% of the rows, a mix of right, inner and left joins, and conditions repeated multiple times. This also makes the queries very hard to debug, as you must re-format them first to make them readable
- The code was also full of dates being cast to strings and then sliced with further substring commands, which is very expensive
A recent software notice on TORS and extended classes (3 October) provided some of the detail to the background of this issue and to the first parts of our work in this area.
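The parameterisation point can be sketched as follows (hypothetical table and column names; this is the general T-SQL pattern, not RepGen's actual output):

```sql
-- Ad-hoc SQL with the value embedded in the string gets a new plan
-- per value and risks implicit type conversions:
--   SELECT ... FROM dbo.Enrolment WHERE StudentRef = '12345'

-- Parameterised version via sp_executesql: one cached plan,
-- and the parameter is properly typed
EXEC sp_executesql
     N'SELECT EnrolmentID, StartDate
       FROM dbo.Enrolment
       WHERE StudentRef = @StudentRef;',
     N'@StudentRef INT',
     @StudentRef = 12345;
```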
- Use sp_whoisactive when the system is slow to determine which process is causing the slowdown
- We can use this and then kill crashed processes to stop them taking the system down
- We also re-write expensive queries to make them more efficient. Temporary tables can help, as can removing extra groupings and sorts we do not need. Many queries were over-complex.
- https://www.brentozar.com/responder/log ... o-a-table/
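A typical call when the system is slow looks like this (sp_WhoIsActive is Adam Machanic's free stored procedure and must be installed first; the session id shown is an example):

```sql
-- Show live activity, including query plans and blocking chains
EXEC sp_WhoIsActive
     @get_plans = 1,           -- include the estimated plan per session
     @find_block_leaders = 1;  -- flag the session at the head of any block

-- Once the offending session_id is identified, it can be terminated:
-- KILL 123;
```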