[Free] 2017(Aug) EnsurePass Examcollection Microsoft 70-762 Dumps with VCE and PDF 1-10

EnsurePass
2017 Aug Microsoft Official New Released 70-762
100% Free Download! 100% Pass Guaranteed!
http://www.EnsurePass.com/70-762.html

Developing SQL Databases

Question No: 1 DRAG DROP

You are analyzing the performance of a database environment.

Applications that access the database are experiencing locks that are held for a long time. You are experiencing isolation phenomena such as dirty reads, nonrepeatable reads, and phantom reads.

You need to identify the impact of specific transaction isolation levels on the concurrency and consistency of data.

What are the consistency and concurrency implications of each transaction isolation level? To answer, drag the appropriate isolation levels to the correct locations. Each isolation level may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

image

Answer:

image

Explanation:

image

Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data changed by concurrent transaction(s).

Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note, T1 still takes transaction-duration locks for any data modified.

Cons: Data is not guaranteed to be transactionally consistent.

Read Committed: A transaction T1 executing under this isolation level can only access committed data.

Pros: Good compromise between concurrency and consistency.

Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.

Repeatable Read: A transaction T1 executing under this isolation level can only access committed data with an additional guarantee that any data read cannot change (i.e. it is repeatable) for the duration of the transaction.

Pros: Higher data consistency.

Cons: Locking and blocking. S locks are held for the duration of the transaction, which can lower concurrency. It does not protect against phantom rows.

Serializable: A transaction T1 executing under this isolation level gets the highest data consistency, including elimination of phantoms, but at the cost of reduced concurrency. It prevents phantoms by taking a range lock, or a table-level lock if a range lock can't be acquired (i.e. no index on the predicate column), for the duration of the transaction.

Pros: Full data consistency including phantom protection.

Cons: Locking and blocking. S locks are held for the duration of the transaction, which can lower concurrency.

References: https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/
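To illustrate, the isolation level is set per session before the transaction starts. A minimal sketch of the Repeatable Read guarantee described above (table and column names are hypothetical):

```sql
-- Hypothetical example: read the same row twice under REPEATABLE READ.
-- The S locks taken by the first SELECT are held until COMMIT, so a
-- concurrent session cannot modify the row between the two reads.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT UnitPrice FROM dbo.Products WHERE ProductID = 42;
    -- ... other work; the row read above cannot change underneath us ...
    SELECT UnitPrice FROM dbo.Products WHERE ProductID = 42;
COMMIT TRANSACTION;
```

Note that new rows matching the WHERE predicate (phantoms) could still appear between the two reads; preventing that requires SERIALIZABLE.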

Question No: 2

You are developing an application that connects to a database. The application runs the following jobs:

image

The READ_COMMITTED_SNAPSHOT database option is set to OFF, and autocommit is set to ON. Within the stored procedures, no explicit transactions are defined.

If JobB starts before JobA, it can finish in seconds. If JobA starts first, JobB takes a long time to complete.

You need to use Microsoft SQL Server Profiler to determine whether the blocking that you observe in JobB is caused by locks acquired by JobA.

Which trace event class in the Locks event category should you use?

  1. LockAcquired

  2. LockCancel

  3. LockDeadlock

  4. LockEscalation

Answer: A

Explanation:

The Lock:Acquired event class indicates that acquisition of a lock on a resource, such as a data page, has been achieved.

The Lock:Acquired and Lock:Released event classes can be used to monitor when objects are being locked, the type of locks taken, and for how long the locks were retained. Locks retained for long periods of time may cause contention issues and should be investigated.

Question No: 3

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.

You observe that many deadlocks appear to be happening during specific times of the day.

You need to monitor the SQL environment and capture the information about the processes that are causing the deadlocks.

What should you do?

  1. Create a sys.dm_os_waiting_tasks query.

  2. Create a sys.dm_exec_sessions query.

  3. Create a Performance Monitor Data Collector Set.

  4. Create a sys.dm_os_memory_objects query.

  5. Create a sp_configure ‘max server memory’ query.

  6. Create a SQL Profiler trace.

  7. Create a sys.dm_os_wait_stats query.

  8. Create an Extended Event.

Answer: F

Explanation:

To view deadlock information, the Database Engine provides monitoring tools in the form of two trace flags and the deadlock graph event in SQL Server Profiler.

Trace Flag 1204 and Trace Flag 1222

When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources. It is possible to enable both trace flags to obtain two representations of the same deadlock event.

References: https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx
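Although the answer here is a Profiler trace, the trace flags described above can be enabled directly as a sketch:

```sql
-- Enable deadlock reporting to the SQL Server error log.
-- The -1 argument enables the flags globally, not just for this session.
DBCC TRACEON (1204, -1);  -- deadlock info formatted by node
DBCC TRACEON (1222, -1);  -- deadlock info formatted by process, then resource

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1);
```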

Question No: 4

You have a database that is experiencing deadlock issues when users run queries. You need to ensure that all deadlocks are recorded in XML format.

What should you do?

  1. Create a Microsoft SQL Server Integration Services package that uses sys.dm_tran_locks.

  2. Enable trace flag 1224 by using the Database Consistency Checker (DBCC).

  3. Enable trace flag 1222 in the startup options for Microsoft SQL Server.

  4. Use the Microsoft SQL Server Profiler Lock:Deadlock event class.

Answer: C

Explanation:

When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources.

The output format for Trace Flag 1222 only returns information in an XML-like format.

References: https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx
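To make trace flag 1222 survive restarts, as the answer requires, it is added as the startup parameter -T1222 (for example via SQL Server Configuration Manager, under the service's Startup Parameters). Once enabled, the deadlock reports land in the error log and can be located with a sketch like this (the search string is illustrative):

```sql
-- Search the current SQL Server error log (log 0, error-log type 1)
-- for the XML-like deadlock output produced by trace flag 1222.
EXEC sys.xp_readerrorlog 0, 1, N'deadlock-list';
```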

Question No: 5 DRAG DROP

You have a trigger named CheckTriggerCreation that runs when a user attempts to create a trigger. The CheckTriggerCreation trigger was created with the ENCRYPTION option and additional proprietary business logic.

You need to prevent users from running the ALTER and DROP statements or the sp_tableoption stored procedure.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.

image

Answer:

image
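The answer content is an image, but the intended pattern is a database-scoped DDL trigger created WITH ENCRYPTION that rolls back the offending statement. A hedged sketch of the technique (the trigger name and exact event list are illustrative, not the exhibit's segments):

```sql
-- Illustrative sketch: a DDL trigger that blocks ALTER/DROP of triggers.
-- WITH ENCRYPTION obfuscates the trigger's definition text;
-- ROLLBACK cancels the DDL statement that fired the trigger.
CREATE TRIGGER PreventTriggerChanges
ON DATABASE
WITH ENCRYPTION
FOR ALTER_TRIGGER, DROP_TRIGGER
AS
BEGIN
    PRINT 'Modifying or dropping triggers is not allowed.';
    ROLLBACK;
END;
```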

Question No: 6 HOTSPOT

You are developing an app that allows users to query historical company financial data. You are reviewing email messages from the various stakeholders for a project.

The message from the security officer is shown in the Security Officer Email exhibit below.

TO: Database developer

From: Security Officer

Subject: SQL object requirements

We need to simplify the security settings for the SQL objects. Having to assign permissions to every object in SQL is tedious and leads to problems. Documentation is also more difficult when we have to assign permissions at multiple levels. We need to assign the required permissions on one object, even though that object may obtain data from other objects.

The message from the sales manager is shown in the Sales Manager Email exhibit below.

TO: Database developer

From: Sales Manager

Subject: Needed SQL objects

When creating objects for our use, they need to be flexible. We will be changing the base infrastructure frequently. We need components in SQL that will provide backward compatibility to our front end applications as the environments change, so that we do not need to modify the front end applications. We need objects that can provide a filtered set of the data. The data may be coming from multiple tables, and we need an object that can provide access to all of the data through a single object reference.

This is an example of the types of data we need to be able to have queries against without having to change the front end applications.

image

The message from the web developer is shown in the Web Developer Email exhibit below.

TO: Database developer

From: Web Developer

Subject: SQL Object component

Whatever you will be configuring to provide access to data in SQL, it needs to connect using the items referenced in this interface. We have been using this for a long time, and we cannot change this front end easily. Whatever objects are going to be used in SQL, they must work using the object types this interface references.

image

You need to create one or more objects that meet the needs of the security officer, the sales manager and the web developer.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

image

Answer:

image

Explanation:

image

  • Stored procedure: Yes

    A stored procedure implements the following requirement:

    Whatever you will be configuring to provide access to data in SQL, it needs to connect using the items referenced in this interface. We have been using this for a long time, and we cannot change this front end easily. Whatever objects are going to be used in SQL, they must work using the object types this interface references.

  • Trigger: No

    No requirements are related to actions taken when changing the data.

  • View: Yes

Because: We need objects that can provide a filtered set of the data. The data may be coming from multiple tables and we need an object that can provide access to all of the data through a single object reference.
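The view requirement from the sales manager's email could be sketched like this (table and column names are hypothetical):

```sql
-- Hypothetical view: one object reference over multiple tables,
-- returning a filtered set of data. The front end queries only the
-- view, so the base tables can change without application changes.
CREATE VIEW dbo.vw_HistoricalFinancials
AS
SELECT f.FiscalYear,
       f.FiscalQuarter,
       a.AccountName,
       f.Amount
FROM dbo.FactFinance AS f
INNER JOIN dbo.DimAccount AS a
    ON a.AccountKey = f.AccountKey
WHERE f.FiscalYear >= 2010;
```

Granting SELECT on the view alone also addresses the security officer's request: permissions are assigned on one object, even though the data comes from the underlying tables.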

Question No: 7 DRAG DROP

You are analyzing the performance of a database environment.

You suspect there are several missing indexes in the current database.

You need to return a prioritized list of the missing indexes on the current database.

How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once or not at all. You may need to drag the split bar between panes or scroll to view content.

image

Answer:

image

Explanation:

image

Box 1: sys.dm_db_missing_index_group_stats

The sys.dm_db_missing_index_group_stats view includes the required columns for the main query: avg_total_user_cost, avg_user_impact, user_seeks, and user_scans.

Box 2: group_handle

Example: The following query determines which missing indexes comprise a particular missing index group, and displays their column details. For the sake of this example, the missing index group handle is 24.

SELECT migs.group_handle, mid.*
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON (migs.group_handle = mig.index_group_handle)
INNER JOIN sys.dm_db_missing_index_details AS mid
    ON (mig.index_handle = mid.index_handle)
WHERE migs.group_handle = 24;

Box 3: sys.dm_db_missing_index_group_stats

The sys.dm_db_missing_index_group_stats view includes the required columns for the subquery: avg_total_user_cost and avg_user_impact.

Example: Find the 10 missing indexes with the highest anticipated improvement for user queries

The following query determines which 10 missing indexes would produce the highest anticipated cumulative improvement, in descending order, for user queries.

SELECT TOP 10 *
FROM sys.dm_db_missing_index_group_stats
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC;

Question No: 8 DRAG DROP

You are evaluating the performance of a database environment.

You must avoid unnecessary locks and ensure that lost updates do not occur. You need to choose the transaction isolation level for each data scenario.

Which isolation level should you use for each scenario? To answer, drag the appropriate isolation levels to the correct scenarios. Each isolation may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

image

Answer:

image

Explanation:

image

Box 1: Read Committed

Read Committed: A transaction T1 executing under this isolation level can only access committed data.

Pros: Good compromise between concurrency and consistency.

Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.

Box 2: Read Uncommitted

Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data changed by concurrent transaction(s).

Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note, T1 still takes transaction-duration locks for any data modified.

Cons: Data is not guaranteed to be transactionally consistent.

Box 3: Serializable

Serializable: A transaction T1 executing under this isolation level gets the highest data consistency, including elimination of phantoms, but at the cost of reduced concurrency. It prevents phantoms by taking a range lock, or a table-level lock if a range lock can't be acquired (i.e. no index on the predicate column), for the duration of the transaction.

Pros: Full data consistency including phantom protection.

Cons: Locking and blocking. S locks are held for the duration of the transaction, which can lower concurrency.

References: https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/

Question No: 9 DRAG DROP

You are monitoring a Microsoft Azure SQL Database. The database is experiencing high CPU consumption.

You need to determine which query uses the most cumulative CPU.

How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than one or not at all. You may need to drag the split bar between panes or scroll to view content.

image

Answer:

image

Explanation:

image

Box 1: sys.dm_exec_query_stats

sys.dm_exec_query_stats returns aggregate performance statistics for cached query plans in SQL Server.

Box 2: highest_cpu_queries.total_worker_time DESC

Sort on the total_worker_time column.

Example: The following example returns information about the top five queries ranked by average CPU time.

This example aggregates the queries according to their query hash so that logically equivalent queries are grouped by their cumulative resource consumption.

USE AdventureWorks2012;
GO
SELECT TOP 5 query_stats.query_hash AS "Query Hash",
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
    MIN(query_stats.statement_text) AS "Statement Text"
FROM
    (SELECT QS.*,
        SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
            ((CASE QS.statement_end_offset
                WHEN -1 THEN DATALENGTH(ST.text)
                ELSE QS.statement_end_offset END
                - QS.statement_start_offset)/2) + 1) AS statement_text
     FROM sys.dm_exec_query_stats AS QS
     CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;

References: https://msdn.microsoft.com/en-us/library/ms189741.aspx

Question No: 10

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in the series. Information and details provided in a question apply only to that question.

You have a reporting database that includes a non-partitioned fact table named Fact_Sales. The table is persisted on disk.

Users report that their queries take a long time to complete. The system administrator reports that the table takes too much space in the database. You observe that there are no indexes defined on the table, and many columns have repeating values.

You need to create the most efficient index on the table, minimize disk storage and improve reporting query performance.

What should you do?

  1. Create a clustered index on the table.

  2. Create a nonclustered index on the table.

  3. Create a nonclustered filtered index on the table.

  4. Create a clustered columnstore index on the table.

  5. Create a nonclustered columnstore index on the table.

  6. Create a hash index on the table.

Answer: D

Explanation:

The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses column-based data storage and query processing to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data size.

A clustered columnstore index is the physical storage for the entire table.
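A clustered columnstore index for a fact table like this could be sketched as follows (the table name comes from the question; the index name is illustrative):

```sql
-- Replace the heap with a clustered columnstore index. The index IS the
-- table's physical storage, so columns with repeating values compress
-- well and reporting queries scan only the columns they reference.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Fact_Sales
ON dbo.Fact_Sales;
```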
