A customer needs to ensure that the number of threads servicing an application does not exceed
the number of database connections available to the application.

What step must you take to address this situation?

A.
Configure a Max Threads Constraint and add your application to the list of applications for the
Constraint.

B.
Configure a Work Manager with a Maximum Threads Constraint tied to the Connection Pool
and configure your application to use the Work Manager.

C.
Configure a Work Manager with a Minimum Threads Constraint tied to the Connection Pool and
configure your application to use the Work Manager.

D.
Configure a global MaxThreads constraint and target it to the server or clusters where your
application is deployed.

E.
Configure the startup parameter “-Dwls-maxThreads” to be the same as the number of
database connections configured.

Explanation:
The scenario is addressed by option B: a Work Manager whose Maximum Threads Constraint is tied
to the Connection Pool, with the application configured to use that Work Manager.
To manage work in your applications, you define one or more of the following Work
Manager components:
* Fair Share Request Class
* Response Time Request Class
* Min Threads Constraint
* Max Threads Constraint
* Capacity Constraint
* Context Request Class
Note:
* max-threads-constraint—This constraint limits the number of concurrent threads executing
requests from the constrained work set. The default is unlimited. For example, consider a
constraint defined with maximum threads of 10 and shared by 3 entry points. The scheduling logic
ensures that not more than 10 threads are executing requests from the three entry points
combined.
A max-threads-constraint can be defined in terms of the availability of a resource that requests
depend upon, such as a connection pool (see the descriptor sketch after these notes).
A max-threads-constraint might, but does not necessarily, prevent a request class from taking its
fair share of threads or meeting its response time goal. Once the constraint is reached the server
does not schedule requests of this type until the number of concurrent executions falls below the
limit. The server then schedules work based on the fair share or response time goal.
* WebLogic Server prioritizes work and allocates threads based on an execution model that takes
into account administrator-defined parameters and actual run-time performance and throughput.

Administrators can configure a set of scheduling guidelines and associate them with one or more
applications, or with particular application components.
* WebLogic Server uses a single thread pool, in which all types of work are executed. WebLogic
Server prioritizes work based on rules you define, and run-time metrics, including the actual time it
takes to execute a request and the rate at which requests are entering and leaving the pool.
The common thread pool changes its size automatically to maximize throughput. The queue
monitors throughput over time and based on history, determines whether to adjust the thread
count. For example, if historical throughput statistics indicate that a higher thread count increased
throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads
did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it
easier for administrators to allocate processing resources and manage performance, avoiding the
effort and complexity involved in configuring, monitoring, and tuning custom execute queues.
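As a rough sketch of what option B looks like in practice, the hypothetical weblogic.xml fragment
below (the names AppDbBoundWorkManager, DbConnectionMaxThreads, and MyAppDataSource are
placeholders, not values taken from the question) defines a Maximum Threads Constraint whose limit
is driven by a JDBC connection pool and routes the application's requests to that Work Manager:

  <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">

    <!-- Work Manager scoped to this application -->
    <work-manager>
      <name>AppDbBoundWorkManager</name>
      <!-- Maximum Threads Constraint tied to a connection pool: concurrent threads
           are capped by the data source's available connections rather than by a
           fixed count -->
      <max-threads-constraint>
        <name>DbConnectionMaxThreads</name>
        <pool-name>MyAppDataSource</pool-name>
      </max-threads-constraint>
    </work-manager>

    <!-- Route this web application's requests to the constrained Work Manager -->
    <wl-dispatch-policy>AppDbBoundWorkManager</wl-dispatch-policy>

  </weblogic-web-app>

Because the constraint references the pool rather than a fixed count, resizing the data source
automatically adjusts the thread ceiling, which is the behavior the scenario asks for.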
Reference: Using Work Managers to Optimize Scheduled Work


