
You use SQL Server 2014. The physical server is a dedicated database server that has 120GB of RAM
available. There is approximately 50GB of storage space available on a slow local disk. You create a new
stored procedure. You decide you need to temporarily hold approximately 300,000 rows from two tables, from
which you will compute two complex business scores.
The stored procedure will use temporary storage defined as follows:

The code will make several passes through the data, applying complex calculations before writing the data to a
permanent disk-based table in the same database from which it reads the data.
For this stored procedure, you need to deal with temporary data in the most efficient way to minimize physical
disk pressure.
What should you do? More than one answer choice may achieve the goal. Select the BEST answer.

A. Option A
B. Option B
C. Option C
D. Option D

Explanation:
* You must specify a value for the BUCKET_COUNT parameter when you create the memory-optimized table.
In most cases the bucket count should be between 1 and 2 times the number of distinct values in the index key.
If the index key contains many duplicate values (on average, more than 10 rows per index key value), use a
nonclustered index instead. You may not always be able to predict how many values a particular index key may
have or will have, but performance should be acceptable if the BUCKET_COUNT value is within 5 times the actual
number of index key values.
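
To make the guideline concrete, below is a minimal sketch of how such a memory-optimized table could be declared
in SQL Server 2014. The table name, columns, and key are hypothetical (the question's storage-definition exhibit is
not shown); the BUCKET_COUNT of 600,000 follows the 1-2x rule for roughly 300,000 distinct key values, and
DURABILITY = SCHEMA_ONLY keeps the temporary rows in memory only, so they never add pressure on the slow local disk.

-- Illustrative sketch only: object and column names are hypothetical, not taken from the exhibit.
-- The database must already have a MEMORY_OPTIMIZED_DATA filegroup before this will run.
CREATE TABLE dbo.ScoreStaging
(
    RowID     INT            NOT NULL,
    SourceKey INT            NOT NULL,
    Amount    DECIMAL(18, 2) NOT NULL,
    Score1    DECIMAL(18, 4) NULL,
    Score2    DECIMAL(18, 4) NULL,

    -- Hash index sized per the guideline: 1 to 2 times the ~300,000 distinct key values.
    CONSTRAINT PK_ScoreStaging
        PRIMARY KEY NONCLUSTERED HASH (RowID)
        WITH (BUCKET_COUNT = 600000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);  -- data is never written to disk or log

Because the data is schema-only, it is lost on restart, which is acceptable here: the rows are intermediate working
data that the procedure writes back to a permanent disk-based table once the score calculations are complete.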


